Evaluation of programs (e.g., training, coaching) and organizations has more to do with facilitating learning than it does with statistics. Typical measures such as ratings of programs, frequency of participation, or fluctuation in revenue are useful only if they become the stimulus for a facilitated discussion among stakeholders that addresses implications for enhancing performance. Good evaluation requires good facilitators.
Kylie Hutchinson writes about this issue in her post on the American Evaluation Association blog, AEA365. She quotes this definition of “facilitator” from the book Facilitator’s Guide to Participatory Decision-Making:
…an individual who enables groups and organizations to work more effectively; to collaborate and achieve synergy. They are a content neutral party who…can advocate for fair, open, and inclusive procedures to accomplish the group’s work. A facilitator can also be a learning or a dialogue guide to assist a group in thinking deeply about its assumptions, beliefs, and values, and about its systemic processes and context.
As organizations seek to be more accountable and to measure the impact of learning interventions, they will need facilitation skills to help people think more deeply about what they believe and what they need to do to improve. Measuring change is part of good evaluation, but unless we can get leaders in an organization to learn from that data and apply that knowledge to improving performance, what’s the point?