As organizations evaluate their programs and services, they need to pay attention to process as well as outcomes. Catherine (Brehm) Rain, of Rain and Brehm Consulting Group, Inc., writes this on the aea365 blog:

Process evaluation ensures you answer questions of fidelity… did you do what you set out to with respect to needs, population, setting, intervention and delivery? When these questions are answered, a feedback loop is established so that necessary modifications to the program or the evaluation can be made along the way.

Life is a journey—and so is a long-term evaluation. Stuff happens. However, it is often in the chaotic that we find the nugget of truth, the unknown need, or a new direction to better serve constituents. A well-documented process evaluation assists programs to ‘turn on a dime’, adapt to changing environments and issues, and maximize outcome potential.

The problem with much of the evaluation that goes on in organizations, whether it's evaluation of training and development programs, communication programs, marketing campaigns, a new sales approach, or strategic planning, is that the process is not examined. You might discover what participants thought of the program from “smile sheets,” and you might even know how participants applied what they learned if you are fortunate enough to do a follow-up assessment, but none of this tells you what happened or what could be changed to achieve greater impact in the future. That is what process evaluation does.

If you tell me that you provided coaching, or diversity training, or emotional intelligence training, or relationship selling, that doesn’t tell me what actually happened. These interventions could be very different from organization to organization, from department to department, or from day to day. Even if an organization implements a highly structured program such as The Fish! Philosophy with its video, guidebook, playbook, and accessories, the experience could be very different for participants in different organizations with different facilitators at different times. To assess its value to the organization and improve the program for future audiences, we have to know what happened and how that was experienced by participants.

It’s like a friend telling you that she went to Las Vegas, Nevada, for a vacation and had a good time. If that’s all you know, then you don’t know very much about her trip. You couldn’t replicate her experience or improve on it for yourself. Maybe she stayed on the Strip and spent every waking moment gambling in casinos. Or maybe she stayed downtown and spent each day on sight-seeing trips to Hoover Dam, Red Rock, and the Grand Canyon without ever setting foot in a casino. Both experiences might receive a “five” on the vacation evaluation form, but that rating would be useless to you until you knew what “Las Vegas vacation” meant in her case.

So it is with learning interventions, whether classroom, online, or informal: we need to know the actual process of learning (anticipated and unanticipated) in those situations in order to improve the experience for other learners and maximize impact on organizations. It is insufficient to know outcomes without knowing what happened to achieve those outcomes.

I’ve evaluated employee training programs that included follow-up coaching. Some participants used the coaching and some did not. Some coaching consisted of several hour-long sessions and some consisted of 15- to 20-minute sessions. Some coaching was done by phone, some was done online and asynchronously, and some was done face-to-face. Some coaching was learner-centered and some was coach-centered. Without knowing how the coaching was actually delivered, we shouldn’t be making decisions about the program.

Lately, the trend in the employee training and development industry has been to emphasize measurement of results and ROI. While I applaud this recognition that end-of-program smile sheets are an inadequate measure of the quality of programs, I think measuring outcomes without describing the factors that contributed to those outcomes is also inadequate. We need both process and outcome evaluation.

 
