Organizations often evaluate their own programs, as well they should. But in doing so, they should be aware that they are probably failing to observe some very important factors. It's the nature of human beings: we suffer from selective attention. This finding is supported by the research of Daniel Simons and Christopher Chabris, who study our ability to see unexpected events. In their new book, The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us, they report on 10 years of showing videos to people all over the world who failed (including this blogger) to see something unexpected while trying to complete an observation task. Test yourself on their latest video:

[youtube https://www.youtube.com/watch?v=IGQmdoK_ZfY&hl=en_US&fs=1?rel=0&w=380&h=238]

The same phenomenon happens when managers of programs and services (e.g., training, customer service, process improvement) examine the effectiveness of their own interventions. They measure pre-determined attitudes and behaviors, and when they do this they are likely to miss unexpected events. For example, in evaluating a diversity program for a client, I found that several managers who had participated in the program later retained employees whom they had been intending to fire. A greater understanding of, and sensitivity to, the individual needs of these employees, which the managers developed in the diversity training, made the difference. This finding was not expected. A study limited to the usually expected outcomes of diversity programs, such as hiring and promoting more women and minorities, would never have discovered this other very important behavior change. This is not to say that managers should never evaluate their own programs. It's just that when they do, they need to be open to the unexpected.
