
Evaluating Training ROI With a Learning Intelligence System

by Mark Place

April 9, 2007

Feature


"One might ask, “Why bother evaluating learning and knowledge transfer? If you evaluate organizational results, you will know if the training was ultimately successful.” The problem with this assertion is that you don’t really know if those results are due to training or to an intervening factor. Additionally, training does not always produce the desired organizational results."

Training is a critical component in any organization’s strategy for innovation and continuous improvement. Yet training is an area where the actual return on investment (ROI) is uncertain. Given the large expenditures for training in many organizations, it is important to develop tools that help companies improve the measurement of training effectiveness and answer the questions below.

These tools need to provide a methodology to measure, evaluate, and continuously improve training, as well as the organizational and technical infrastructure (systems) to implement the methodology. We would like to know:

  • Is the training program effective?
  • How can we improve the program?
  • Did the program achieve the desired results at the lowest possible cost?

The emerging body of knowledge on transfer of training suggests a number of important propositions and conclusions. For example, the transfer “climate” can have a powerful impact on the extent to which people use newly acquired competencies back on the job. Delays between training and actual use on the job directly relate to skill decay. Social, peer, subordinate, and supervisor support all play a central role in transfer. And finally, it is possible to design intervention strategies to improve the probability of transfer. All four of these intervening factors affect the results of any given training program, and there are potentially other factors.

One might ask, “Why bother evaluating learning and knowledge transfer? If you evaluate organizational results, you will know if the training was ultimately successful.” The problem with this assertion is that you don’t really know if those results are due to training or to an intervening factor. Additionally, training does not always produce the desired organizational results. It is possible to gain important organizational knowledge by finding the causes of failed training.

In this article, I discuss learning intelligence systems and an approach to learning improvement. A learning intelligence system is an important connection between the measures of learning effectiveness that an LMS can provide and the larger enterprise metrics that indicate whether learning transferred. But this is not enough to ensure improvement of results at the enterprise level. For that, we must borrow some ideas from industry.

Measuring organizational results

Calculation of training ROI requires measuring organizational results. Training operation metrics, such as course enrollments and completions, assessment scores, and the results of feedback forms and surveys, may directly relate to those results. Performance data on an individual, department, or business unit level may indicate other results. Unless a training program exists simply for the sake of training, measurements should include actual performance data, not just data about performance during the training. Selected metrics, such as sales, customer satisfaction, workplace safety, productivity, and others, should help demonstrate where training has increased revenue or decreased costs.

For example, corporate health and safety programs train personnel in the hope of reducing workplace accidents, whether OSHA regulations or the company itself mandates the program. Reducing workplace accidents implicitly means reducing costs, both tangible and intangible. While the tangible costs of workplace accidents are fairly straightforward, the company also incurs less tangible costs in lost productivity that are even more difficult to measure and account for. An accident in the workplace can adversely affect the behavior of uninjured workers in other ways: they may approach a piece of machinery more tentatively, or otherwise work with extreme care to avoid an accident themselves.

Rough ROI measurements that consider performance improvements can provide a benchmark for training effectiveness. After implementing a training initiative or changing an existing program, an organization can observe and record a change in performance. A reduced incidence of workplace accidents would be one example of such a change. The organization could compare the value of the performance change (X dollars saved per avoided incident) to the program’s cost to arrive at an ROI measurement. While this approach appears to answer the question of training program effectiveness, it provides no insight into improving training from either a results or a cost perspective.

It also does not provide a means to calculate the ROI that may occur as a result of routine training programs. (How often should workers take the workplace safety program to ensure that accidents remain at the lower level of incidence?)
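
To make the arithmetic concrete, here is a minimal sketch of the rough ROI calculation just described, using the workplace safety example. All of the figures and names are hypothetical; a real program would pull these values from its incident-tracking and training-cost systems.

    # A minimal sketch of the rough ROI calculation described above.
    # All figures are hypothetical; a real program would pull these
    # values from incident-tracking and training-cost systems.

    incidents_before = 40          # accidents in the year before training
    incidents_after = 28           # accidents in the year after training
    cost_per_incident = 12_000.00  # estimated tangible cost per incident (dollars)
    program_cost = 85_000.00       # total cost of the safety training program

    savings = (incidents_before - incidents_after) * cost_per_incident
    roi = (savings - program_cost) / program_cost

    print(f"Estimated savings: ${savings:,.2f}")   # $144,000.00
    print(f"Rough ROI: {roi:.1%}")                 # 69.4%

Even this simple calculation makes the limitation plain: it says nothing about why the incident count fell, or how the program might be improved.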

Evaluating training and performance

Many learning operations evaluate success based on the ubiquitous Kirkpatrick model, or some variation of it. Most organizations that follow this model are unable to evaluate their programs beyond the first two Kirkpatrick levels. (See the article by Tita Beal in the March 26, 2007 issue of Learning Solutions for an explanation of the Kirkpatrick model.) This is partly because the learning management systems (LMSs) that many organizations use make lower-level evaluations easy but provide no mechanism for higher-level evaluation.

Most learning management systems will automatically track and report information required for Level One and Level Two analyses. They include assessment tools that can capture each learner’s reaction to the course, and templates that can create reports. For online and blended learning, the Level One assessment (the learner’s reaction) can be completely integral to the course. Likewise, training programs can inexpensively and easily administer pre- and post-tests that evaluate learning results (Level Two).

Levels Three and Four evaluations become more difficult and costly to implement and administer. What makes these Levels so difficult to evaluate? For the first two Levels, data collection occurs during course delivery. When evaluating changes in student behavior and training’s influence on business results, data collection requirements extend beyond course delivery.

Different evaluation methods can help answer whether students’ behavior actually changed after completing a course or attaining a certification. Data can come from many different sources, including customer satisfaction survey data or performance evaluations by supervisors. Trainers or other designated staff can also observe students and record behavior as the students perform their jobs. To evaluate retention rates, there should be a delay between the training and these behavior measurements.

Organizations rarely evaluate the business impact of a program. This is much more difficult and costly. Large organizations can use a control group to isolate training from other intervening variables. One group does not get training, while another group does. Comparing the business results from each group reveals whether the trained group performed better. Clearly, control groups present a few challenges. Performance measurements must take place over the same time periods. Both the control group and the group receiving training must be nearly identical in make-up, performing similar job functions. Sales groups operating in different sales regions or manufacturing groups working in different shifts will confound the results. With these challenges, structuring a valid control group requires great commitment on the part of the organization.
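
As a hedged illustration of the control-group approach, the sketch below compares mean results for a trained group against an untrained control group measured over the same period. The per-rep sales figures are invented; a real evaluation would also test whether the difference is statistically significant before crediting it to training.

    # A sketch of the control-group comparison: mean results for a
    # trained group versus an untrained control group, measured over
    # the same period. The per-rep sales figures are invented.

    from statistics import mean

    trained_sales = [102.5, 98.0, 110.3, 105.1, 99.8]   # trained group
    control_sales = [95.2, 97.4, 94.0, 101.6, 93.3]     # control group

    lift = mean(trained_sales) - mean(control_sales)
    print(f"Trained mean:  {mean(trained_sales):.1f}")  # 103.1
    print(f"Control mean:  {mean(control_sales):.1f}")  # 96.3
    print(f"Apparent lift: {lift:.1f}")                 # 6.8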

But to perform a good ROI analysis, an organization must really be evaluating its training at Levels Three and Four. Without these levels, ROI analysis becomes merely an exercise in cost justification. Implementing blended learning, for example, may appear to have a high ROI. After all, it is less expensive than instructor-led training, and Level Two assessment scores improve. However, without an understanding of the impact on individual and organizational performance, one must ask what value the training is really delivering to the organization.

Actually, much of the data needed to bridge the gap between training and performance already exists in many organizations. Individual performance data resides in performance management systems; organizational data resides in marketing, sales, and financial systems. Bridging the gap requires a technical infrastructure that minimizes the administrative effort needed to collect and analyze the training and performance data together. However, learning management systems, at once the most common repository for training data and the most common mechanism for delivering training, cannot easily bridge that gap.
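
The core operation is simple to state: match training records against performance records for the same people. Here is a minimal sketch of the kind of join involved, with hypothetical field names and records.

    # A simplified sketch of the kind of join a learning intelligence
    # system performs: matching LMS completion records against
    # performance records by employee ID. Names and records are
    # hypothetical.

    completions = {               # from the LMS
        "E100": "2007-01-15",
        "E101": "2007-01-22",
    }
    performance = {               # from a performance management system
        "E100": {"rating": 4.2},
        "E101": {"rating": 3.8},
        "E102": {"rating": 3.5},  # no training record for this employee
    }

    # Combine the sources so trained and untrained performance can be compared.
    for emp_id, review in sorted(performance.items()):
        completed = completions.get(emp_id)
        status = f"trained on {completed}" if completed else "not trained"
        print(f"{emp_id}: rating {review['rating']} ({status})")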

From a functional standpoint, each new LMS release adds more robust reporting and data analysis capability, as well as human resource system integration. In addition, many LMS vendors have added talent and performance management features to their human capital solution suites. To some degree, these evolutionary changes address the gap between training and its impact on individual performance. But they don’t even begin to address the gap with organizational performance.

Why are organizations still unlikely to evaluate training at Kirkpatrick’s Level Three? What path can lead to Level Four evaluations? System integration, one common point of failure, is critical. Many LMS vendors with a history as product companies have limited expertise in system integration that extends beyond learning systems and databases. Successfully managing performance-based training evaluation, however, requires expertise in data management and warehousing, a variety of corporate systems and databases, analytics, and Web-based application development.

Evaluating training ROI most effectively requires the right technical infrastructure and a model of learning improvement.

Learning intelligence system

A reporting and data management strategy that focuses on the LMS as the foundation only compounds the system integration challenges that make performance-based training evaluation unmanageable. Instead, the organization should adopt a cross-functional corporate reporting and data management strategy. The technical foundation for this strategy is not the LMS, but a learning intelligence system that acts as a broker between an LMS and other corporate systems. The features of a learning intelligence system (see Figure 1 and the sketch that follows it) include:

  • Independence from an LMS
  • Cross-functional system integration
  • Alignment to individual and organizational performance
  • Reporting and analytical tools

Figure 1: The functional parts and characteristics of a learning intelligence system
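
To make the broker idea concrete, here is a speculative sketch of the role: the learning intelligence system registers any number of corporate source systems and serves a combined view to whatever reporting tool a decision-maker uses. The class and method names are invented for illustration, not drawn from any particular product.

    # A speculative sketch of the broker role. The learning intelligence
    # system registers any number of corporate source systems and serves
    # a combined view; the LMS is just one source among several. Class
    # and method names are invented for illustration.

    class LearningIntelligenceSystem:
        """Brokers data between an LMS and other corporate systems."""

        def __init__(self):
            self.sources = {}  # source name -> callable returning records

        def register_source(self, name, fetch):
            """Register any corporate system (LMS, HR, sales, safety...)."""
            self.sources[name] = fetch

        def combined_view(self):
            """Aggregate current records from every registered source."""
            return {name: fetch() for name, fetch in self.sources.items()}

    lis = LearningIntelligenceSystem()
    lis.register_source("lms", lambda: [{"course": "Safety 101", "completions": 214}])
    lis.register_source("safety", lambda: [{"quarter": "Q1", "incidents": 7}])
    print(lis.combined_view())

Because reporting reads from the broker rather than from the LMS directly, replacing the LMS touches only the one registered source.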


LMS independence

You should not lock a learning intelligence system into a single LMS platform. Instead, a generic framework should map common LMS data to variables in the learning intelligence system. LMS independence ensures more stability over time, and it minimizes the system integration required if the organization upgrades the LMS or replaces it with another system.

Typically, an organization will feed performance, job code, certification, and other corporate data into the LMS reporting system. By adding a learning intelligence system between the LMS and other corporate systems, the organization only needs to update one data connection if the LMS changes.
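
A minimal sketch of that generic mapping follows, assuming hypothetical vendor field names: each LMS gets one field map that translates its records into the learning intelligence system’s own variables, so swapping the LMS means swapping one map rather than rewiring every report.

    # A minimal sketch of the generic mapping, with hypothetical
    # vendor-specific field names. Each LMS gets one field map that
    # translates its records into the system's own variables.

    VENDOR_A_MAP = {
        "learnerId": "employee_id",
        "courseCode": "course_id",
        "finishedOn": "completion_date",
    }

    def normalize(record, field_map):
        """Rename vendor-specific LMS fields to generic variables."""
        return {generic: record[vendor] for vendor, generic in field_map.items()}

    raw = {"learnerId": "E100", "courseCode": "SAF-101", "finishedOn": "2007-01-15"}
    print(normalize(raw, VENDOR_A_MAP))
    # {'employee_id': 'E100', 'course_id': 'SAF-101', 'completion_date': '2007-01-15'}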

Cross-functional system integration

As a broker for business intelligence throughout the organization, a learning intelligence system needs to aggregate data from multiple corporate systems. If assembling information is too cumbersome and time-consuming, or the data is outdated or incorrect, the system cannot enhance ROI evaluations by combining training data with other business data. Cross-functional system integration allows the organization to leverage training and business data together in a context-sensitive manner. Technical or political requirements may dictate that decision-makers in different corporate domains access data through different systems; cross-functional system integration allows the learning intelligence system to push data to the portal or reporting system used by a particular decision-maker.

One of the primary challenges in implementing cross-functional system integration is integrating a diverse range of existing data sources. Different systems, including those of third-party vendors, may process the data feeds, often in a flat-file format. It can be difficult to create reports and correct mistakes, especially when the work involves many people exchanging flat files.

If the organization has a corporate data warehouse, the learning management system can push its learning data into this consolidated data source. Any corporate reporting system can then access the learning data, combine it with other business data, and make more advanced ROI calculations. Each data owner maintains the integrity of its own data in the consolidated source, which provides a unified data access point.
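
As a hedged sketch of that scenario, assume training completions and safety incidents have both landed in the warehouse; any reporting tool can then relate them with ordinary SQL. The table and column names below are hypothetical.

    # A sketch of warehouse-based reporting: once LMS data lands in the
    # consolidated store, any reporting tool can join it against business
    # data with ordinary SQL. Table and column names are hypothetical.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE training (dept TEXT, completions INTEGER);
        CREATE TABLE incidents (dept TEXT, accidents INTEGER);
        INSERT INTO training VALUES ('plant_a', 120), ('plant_b', 45);
        INSERT INTO incidents VALUES ('plant_a', 3), ('plant_b', 11);
    """)
    query = """
        SELECT t.dept, t.completions, i.accidents
        FROM training t JOIN incidents i ON t.dept = i.dept
        ORDER BY i.accidents
    """
    for dept, completions, accidents in db.execute(query):
        print(dept, completions, accidents)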

If the organization does not have a centralized data warehouse, the learning intelligence system becomes critical to cross-functional reporting that includes training data. When many different locations contain the data, a learning management system would need to send and receive data through many connections. To manage these different data sources dynamically, a learning intelligence system can receive data from these disparate sources and present it through a common, cross-functional reporting and analytical system.

Although integrating multiple data sources can require significant system integration effort, the organization gains greater control over its learning and business data. Automating the collection of the training data and consolidating it with business data reduces sources of error and ensures accurate and up-to-date information, which users can share more extensively with a minimal degree of administrative effort.

