If you’re not going to evaluate a leadership development program, don’t do the program! It will be a waste of time, money, energy, and trust. End-of-program reactionnaires (aka smile sheets) don’t count as evaluation. I’m talking about a systematic, evidence-based look at why it was done, what was done, how it was done, what happened as a result, how it can be improved, and what the organization learned from the process.
Viv Nunn of the UK’s Open University, in an article for TrainingZone, explains some of the reasons for evaluating professional development programs. She writes that evaluation should provide the following:
- Evidence of the extent to which the professional development is contributing to your organisation's success
- Validation that the correct learning solution has been identified and suggestions for programme improvement
- Advice about how to get the most from your L&D budget by considering the workplace learning environment
I would add:
- Reinforcement of learning. That is, the act of evaluating (i.e., observing) a leadership development program reinforces learning. If done well, evaluation heightens awareness of goals, unintended consequences, and the knowledge, skills, and attitudes that were learned.
To achieve these purposes, you have to look beyond the seminar, workshop, internship, mentoring, coaching, online course, or any other method of instruction. Learning, especially about leadership, is affected by the organizational environment and external factors, as well as the program. To fully understand why and how learning did or did not occur and what impact it had, you have to examine organizational factors in addition to program factors.
Nunn provides us with an excellent description of the factors that can affect learning:
If the workplace learning environment isn't right, if learners cannot find opportunities to apply what they have learned at work, if the organisational culture is not one which encourages them to try out their new skills and knowledge, there will be little or no impact to report, regardless of how effective the actual learning solution has been. Isolating learning from its context will not provide useful data alone. Evaluating the context of learning along with the learning solution will give you the greatest opportunity to get the most from your investment.
I hate to think how many leadership development programs I’ve participated in, as either learner or facilitator, that were thought-provoking, information-rich, and emotionally powerful, yet resulted in little or no change because there was no immediate opportunity to apply the learning back on the job. Participants might learn about team-building but have no team to lead in the workplace, or be discouraged by management from applying what they learned with their teams.
Evaluation of professional development must examine how workplace culture supports or hinders learning. Knowing whether intended results eventually follow a professional development program won't help you unless you understand the factors that affected those results. And, as Nunn suggests, it's things like clarity of goals, managers' attitudes toward learning, incentives for professional development, opportunities to apply new learning, and feedback that say more about performance improvement than the quality of instruction alone.
Evaluation is not an option; it’s an integral part of the learning process. If you want a leadership development program to be more than entertainment, and you want it to achieve learning that results in significant performance improvement, then you must evaluate both the program and the organizational environment of that program.