The Donald Kirkpatrick Model


Chrm Message From: CHRM Total Posts: 178 Join Date:
Rank: Leader Post Date: 07/07/2006 03:55:05 Points: 890 Location: India

Training managers are always hard-pressed to prove the effectiveness of the training programmes they conduct. An update on one of the most popular techniques - the Donald Kirkpatrick model

Organisations are under pressure to justify various expenses, and the training budget is often not exempt from this scrutiny. A number of questions are raised about the value derived from training programmes, both directly and indirectly. Business heads and training managers are under pressure to prove the effectiveness of training.

One of the most popular methodologies for measuring training effectiveness was developed by Donald Kirkpatrick. The model articulates a four-level framework.

* Level 1: Reactions.

At this level, we measure the participants’ reaction to the programme, through feedback forms (also termed “happy sheets”). It throws light on the level of learner satisfaction. The analysis at this level serves as input to the facilitator and training administrator, enabling them to make decisions on continuing the programme, changing the content or methodology, etc.
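As a minimal sketch, the “happy sheet” analysis at this level amounts to averaging the ratings per question. The questions, the 1-5 scale and the review threshold below are all invented for illustration:

```python
# Hypothetical Level 1 analysis: average "happy sheet" ratings per question
# so the facilitator can see which aspects of the programme need revision.
# Questions, the 1-5 scale and the 3.5 threshold are invented examples.
from statistics import mean

feedback = [
    {"content": 4, "facilitator": 5, "relevance": 3},
    {"content": 5, "facilitator": 4, "relevance": 4},
    {"content": 3, "facilitator": 4, "relevance": 2},
]

# Average rating per question across all returned forms.
summary = {q: mean(form[q] for form in feedback) for q in feedback[0]}

# Flag questions whose average falls below the (assumed) acceptable threshold.
needs_review = [q for q, score in summary.items() if score < 3.5]

print(summary)
print(needs_review)
```

In practice the threshold and the decision (revise content, change methodology, discontinue) would be set by the training administrator, not the script.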

* Level 2: Participant learning.

We measure changes pertaining to knowledge, skill and attitude. These are changes that can be attributed to the training. Facilitators utilise pre-test and post-test measures to check on the learning that has occurred. However, it is important to note that learning at this level does not necessarily translate into application on the job.

Measuring the effectiveness of training at this level is important as it indicates the quantum of change vis-à-vis the learning objectives that were set. It provides critical inputs for fine-tuning the design of the programme, and it serves as a lead indicator for transfer of learning to the job context.
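The pre-test/post-test comparison described above can be sketched in a few lines. The participant names, scores and scoring scale are invented:

```python
# Hypothetical Level 2 measurement: compare each participant's pre-test and
# post-test scores to quantify the learning gain. All data is invented.

def learning_gain(pre_scores, post_scores):
    """Return per-participant gain and the average gain across the group."""
    gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}
    average = sum(gains.values()) / len(gains)
    return gains, average

pre = {"Asha": 45, "Ravi": 60, "Meena": 55}
post = {"Asha": 80, "Ravi": 75, "Meena": 85}

gains, avg = learning_gain(pre, post)
print(gains)  # improvement per participant
print(avg)    # average gain, to compare against the learning objectives
```

The average gain is what gets compared against the learning objectives; the per-participant figures help identify who may need follow-up support.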

* Level 3: Transfer of learning.

At this level, we measure the application of the learning in the work context, which is not an easy task. It is not easy to define standards that can be utilised to measure application of learning, and there is always a question that preys on people’s minds: ‘Can all changes be attributed to the training?’

Inputs at this level can come from participants and their supervisors. It makes sense to obtain feedback from the participants on the application of learning on the job. This can be done a few weeks after the programme so that it gives the participants sufficient time to implement what they have learnt. Their inputs can indicate the cause of success or failure; sometimes it is possible that learning was good at level-2, but implementation did not happen due to system-related reasons. It can help the organisation deal with the constraints posed by systems and processes so that they do not come in the way of applying learning.

* Level 4: Results.

This measures effectiveness of the programme in terms of business objectives. At this level, we look at aspects such as increase in productivity, decrease in defects, cycle-time reduction, etc.

Many organisations would like to measure effectiveness of training at this level; the fact remains that this is not easy, as a direct linkage is hard to demonstrate. However, it is worthwhile making the attempt even if the linkage at this level is indirect.

It is possible for organisations to measure effectiveness for all programmes at level-1 and level-2. This can be built into the design of the training programme.

I have found that it is easy to measure training programmes related to technical and functional areas at level-3 and level-4. It is not easy to do this with behavioural skills programmes. Organisations that choose to measure training effectiveness can start with the former category before moving on to measuring behavioural skills at level-3 and level-4.

Here is an example to show how we can measure some training programmes at level-3 and level-4. Consider the case of an IT services company that conducts technical training programmes on products for its service engineers.

Learning at level-2 can be measured at the end of the programme by the use of tests—both written and practical. Measurement at level-3 is possible for these programmes by utilising the wealth of data the organisation will have on calls attended by engineers at various customer sites. This data is generally available in “Call Tracking Systems”.

I have found valuable insights by comparing data from the periods before and after the training programme. To simplify analysis, we can take a 24-week cycle: 12 weeks prior to the training and 12 weeks subsequent to it. The data gives a picture of aspects such as:

• How many calls did the engineer attend on the given product prior to and after the programme? We need to analyse this data. If sufficient calls were not taken after the training, is it due to the fact that there were no calls in this category or because the engineer was not confident to take calls?

• Comparison of the average time to complete a call. Did the cycle time to close similar calls reduce?

• Comparison of the quality of the solution, e.g. did the problem recur within a specified period?

• Did the engineer change parts when they were not required to be changed? Such speculative change of spares gives an indication of the diagnostic capability of the engineer. Organisations get to know the details of such speculative changes when a so-called defective spare is returned by the repair centre with a statement that there is no problem with it.
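A minimal sketch of the 12-weeks-before / 12-weeks-after comparison might look like the following. The call records, field layout and figures are invented; a real export from a call tracking system would replace the sample data:

```python
# Hypothetical Level 3 analysis: compare an engineer's call statistics for the
# 12 weeks before the training (negative week numbers) and the 12 weeks after.
# Records, field names and values are invented for illustration.
from statistics import mean

calls = [
    # (engineer, week relative to training, hours_to_close, repeat_call)
    ("E1", -5, 6.0, True),
    ("E1", -2, 5.5, False),
    ("E1",  3, 3.0, False),
    ("E1",  8, 2.5, False),
]

before = [c for c in calls if -12 <= c[1] < 0]
after = [c for c in calls if 0 < c[1] <= 12]

report = {
    "calls_before": len(before),
    "calls_after": len(after),
    "avg_hours_before": mean(c[2] for c in before),
    "avg_hours_after": mean(c[2] for c in after),
    "repeat_rate_before": sum(c[3] for c in before) / len(before),
    "repeat_rate_after": sum(c[3] for c in after) / len(after),
}
print(report)
```

The same structure extends to the other questions above, such as counting calls per product category or flagging speculative spare changes, provided the call tracking system records those fields.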

The data from the call tracking system and other related data give a clear indication of application on the job. However, I would not attribute all of the transfer of learning to the training. It is possible that the organisation has instituted mechanisms such as mentoring, or sending new engineers on calls with senior colleagues, to enable them to also learn on the job. Hence the data needs to be interpreted keeping the overall environment in mind.

This data can also be utilised to measure effectiveness at level-4. It is easy to calculate productivity increases and cost savings for the example cited above. The measures from level-3 can be converted into revenue or cost saving figures.
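Converting a level-3 measure into a cost saving is simple arithmetic. Every figure below is assumed purely for illustration:

```python
# Hypothetical Level 4 conversion: turn an assumed cycle-time reduction into
# a cost-saving figure. All rates and volumes are invented.
hours_saved_per_call = 3.0               # assumed reduction in hours to close a call
calls_per_engineer_per_quarter = 40      # assumed call volume
engineers_trained = 10                   # assumed batch size
cost_per_engineer_hour = 30.0            # assumed loaded hourly cost

saving = (hours_saved_per_call
          * calls_per_engineer_per_quarter
          * engineers_trained
          * cost_per_engineer_hour)
print(saving)  # quarterly cost saving attributable (in part) to the training
```

As noted above, not all of the saving should be attributed to the training alone; the figure is an upper bound to be interpreted alongside the other enabling mechanisms.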

Similarly, it is possible to conduct measurement in areas such as software development, manufacturing, accounting and other functional skills. There are prerequisites to measuring effectiveness of training at this level: it is important for the organisation to institute strong indicators of performance levels.

There are mechanisms to measure effectiveness of behavioural skills at level-3, but they are cumbersome to implement and need a fair amount of investment by the organisation in terms of time and money. Organisations that have implemented assessment centres have been able to measure learning at this level. The assessment centre is a large topic in its own right and is outside the scope of this article.

My suggestion to organisations that embark on measuring effectiveness of training is to measure all programmes at level-1 and level-2. Measurement at level-3 and level-4 can start with the functional skills programmes, before moving on to the behavioural skills programmes.

Regards,

CHRM

"To win...you must stay in the game" - Claude Bristol

Chrm Message From: srini Total Posts: 131 Join Date:  
Rank: Leader Post Date: 07/03/2007 12:30:17 Points: 655 Location: India

I would like to give you some fresh perspectives on measuring training effectiveness in our organisation.

Basically, we work on KIRKPATRICK'S MODEL, i.e. REACTION, LEARNING, BEHAVIOUR & RESULTS.

Primarily, we try to get as many comments as possible from the training feedback forms.

1. Questions like "What are the key learning points from the training?" and "How do you plan to implement the learning from the programme in your day-to-day working?" give you an idea of the LEARNING, REACTION and RESULTS parts.

2. With regard to BEHAVIOUR, it is primarily assessed through observations and views from the manager or immediate superiors.

I hope this helped. More perspectives from members are welcome...

srini

Chrm Message From: craig Total Posts: 30 Join Date:  
Rank: Executive Post Date: 07/03/2007 12:34:37 Points: 150 Location: India

Training can be measured in a variety of ways, including the following (items I-V are in increasing order of business value):

I - Prior to training
- The number of people that say they need it during the needs assessment process.

- The number of people that sign up for it.

II - At the end of training
- The number of people that attend the session.

- The number of people that paid to attend the session.

- Customer satisfaction (attendees) at end of training.

- Customer satisfaction at end of training when customers know the actual costs of the training.

- A measurable change in knowledge or skill at end of training.

- Ability to solve a "mock" problem at end of training.

- Willingness to try or intent to use the skill/ knowledge at end of training.

III - Delayed impact (non-job)
- Customer satisfaction at X weeks after the end of training.

- Customer satisfaction at X weeks after the training when customers know the actual costs of the training.

- Retention of Knowledge at X weeks after the end of training.

- Ability to solve a "mock" problem at X weeks after end of training.

- Willingness to try (or intent to use) the skill/ knowledge at X weeks after the end of the training.

IV - On the job behavior change
- Trained individuals that self-report that they changed their behavior / used the skill or knowledge on the job after the training (within X months).

- Trained individuals whose managers report that they changed their behavior / used the skill or knowledge on the job after the training (within X months).

- Trained individuals that actually are observed to change their behavior / use the skill or knowledge on the job after the training (within X months).

V - On the job performance change
- Trained individuals that self-report that their actual job performance changed as a result of their changed behavior / skill (within X months).

- Trained individuals whose managers report that their actual job performance changed as a result of their changed behavior / skill (within X months).

- Trained individuals whose managers report that their job performance changed (as a result of their changed behavior / skill) either through improved performance appraisal scores or specific notations about the training on the performance appraisal form (within X months).

- Trained individuals that have observable / measurable (improved sales, quality, speed etc.) improvement in their actual job performance as a result of their changed behavior / skill (within X months).

- The performance of employees that are managed by (or are part of the same team with) individuals that went through the training.

- Departmental performance in departments with X% of employees that went through the training.

- ROI (cost/benefit ratio) of return on training dollar spent (compared to our competition, last year, other offered training, preset goals, etc.).

Other measures

- CEO / Top management knowledge of / approval of / or satisfaction with the training program.

- Rank of training seminar in forced ranking by managers of what factors (among miscellaneous staff functions) contributed most to productivity/ profitability improvement.

- Number (or %) of referrals to the training by those who have previously attended the training.

- Additional number of people who were trained (cross-trained) by those who have previously attended the training. And their change in skill/ behavior/ performance.

- Popularity (attendance or ranking) of the program compared to others (for voluntary training programs).
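The ROI measure in the list above reduces to a simple cost/benefit calculation; the figures below are invented for illustration:

```python
# Hypothetical ROI calculation for a training programme.
# Both figures are invented; in practice "benefit" would come from a
# Level 4 measurement and "cost" from the training budget.
benefit = 36000.0   # e.g. measured cost savings attributed to the training
cost = 12000.0      # delivery, materials and participant time

roi_ratio = benefit / cost                    # benefit per dollar spent
roi_percent = (benefit - cost) / cost * 100   # net return as a percentage
print(roi_ratio, roi_percent)
```

Either form can then be compared against last year, other offered training, or preset goals, as the list suggests.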

Regards

craig