At the end of Dan McCarthy’s blog post, “How to Evaluate a Training Program,” in which he explains his pre-post survey approach to applying the Kirkpatrick four levels of training evaluation, he asks: “Has anyone used a system like this, or something better? What do you think, is it worth the bother?” Since McCarthy asked, here is my response.

Yes. I’ve used a similar “system” of evaluation many times in my career as a program evaluator. However, I don’t recommend this approach any longer for most situations. While better than nothing, for many training programs (as well as for coaching, mentoring, simulations, self-directed learning, etc.), this approach does not produce the information needed to continuously improve performance and achieve business results. There are at least six reasons for this.

First of all, there is low correlation among the four levels (reaction, learning, behavior, and results). This takes nothing away from the contribution that the Kirkpatrick model has made to the field over the past 50 years; I hate to think where employee training would be today if we hadn’t been guided by Donald Kirkpatrick’s thinking. However, just because learners liked the training doesn’t mean that they learned anything new, will apply what they learned, or will contribute to achieving business results. Knowing the outcome at one level does not predict the outcomes at the others.

Second, self-report surveys produce unreliable performance data. What people say they did or will do is often not what actually happens (I know this is not shocking news). This is not to say that we shouldn’t ask; it just means that we have to interpret those findings very cautiously.

Third, whether to evaluate shouldn’t be a decision at all. Good training programs measure impact; they hold learners (and their organizations) accountable. That’s part of the learning process. Some form of evaluation should always be done because it reinforces learning and contributes to performance improvement.

Fourth, the outcomes of leadership training cannot be fully anticipated. To construct a useful survey, we have to be able to anticipate what will be learned, how it will be applied, and what difference it will make so that we know what questions to ask. However, especially with leadership training, that kind of prediction is nearly impossible. The only way to discover all of the outcomes is through direct observation or in-depth interviews.

Fifth, if we apply the Kirkpatrick model strictly, then we’re not asking whether the training program is the right thing to be doing in the first place. Shouldn’t we ask that question first, before we ask about reactions, learning, behavior, and results?

Sixth, performance improvement is never the result of training alone. Many organizational factors (e.g., manager support) determine whether learning will have an impact. If, as McCarthy suggests, we only compare what people knew before training to what they know and do after training, we still will not know what to do to achieve better results.

In most learning interventions, especially for leadership and management training, Robert O. Brinkerhoff’s Success Case Method is a more useful approach to evaluation. I have written about this method in previous blog posts.

Thank you, Dan McCarthy, for asking. 
