Surveys say that only 30% of employees are engaged in their work, 54% of employees are likely to look for a different job as the economy improves, and 96% of CEOs believe their companies should be doing more to measure the business impact of learning and development programs. Other studies report evidence that pay-for-performance programs increase productivity, diversity programs don't contribute to the bottom line, and Millennials are motivated more by flexible schedules than by money. Every day, it seems, a new measure of employee attitudes and behavior is reported in the press. These pronouncements are gobbled up by the public and treated as if they are new truths. John Allen Paulos asks us to examine the way in which these studies are done before we believe everything they say. He writes in the New York Times Magazine:

…do we hold an outsize belief in our ability to gauge complex phenomena, measure outcomes and come up with compelling numerical evidence? A well-known quotation usually attributed to Einstein is “Not everything that can be counted counts, and not everything that counts can be counted.” I’d amend it to a less eloquent, more prosaic statement: Unless we know how things are counted, we don’t know if it’s wise to count on the numbers.

Additionally, he writes that we should be wary of the methods used:

No method of measuring a societal phenomenon satisfying certain minimal conditions exists that can’t be second-guessed, deconstructed, cheated, rejected or replaced. This doesn’t mean we shouldn’t be counting — but it does mean we should do so with as much care and wisdom as we can muster.

I think we rely too much on responses to a few survey questions and on counts of a small number of isolated behaviors. That kind of data, as Paulos suggests, cannot give us an accurate picture of the complexity of human behavior and the situations in which people find themselves. 

In many cases, only an employee's story can give us the information that we need. By "story," I mean the narrative of how the organization, other people, and events influenced a change in attitudes and behavior, and how that change affected business results. My experience is that success stories are more informative than stories about failures, but both types can be useful. Robert O. Brinkerhoff has developed a method that uses success stories to evaluate the impact of training and development programs. René Lavinghouze writes about using success stories to evaluate the progress of disease prevention programs. Stories about people and their experiences are often more accurate and more useful than things that can be counted.