Viewing entries tagged: outcomes

Short Course on Evaluation of Training and Learning in the Knowledge Economy

LAD Global, in partnership with the Singapore Training and Development Association, has made my short course on evaluation of training and learning available for free online. I hope people involved in talent development will find this course to be a helpful introduction to measuring the impact of all types of learning interventions, not only formal training.

My emphasis in this course is on using measurement and evaluation for learning. Much of evaluation in organizations today is still focused on formal training programs and limited to Kirkpatrick’s “level one”. In other words, L&D professionals are using “smile sheets” that measure immediate reaction to classroom instruction, collected at the end of training. Of course, we are all curious about what participants think of our programs and of us as trainers, and that is useful to know for marketing purposes.

But this information is not particularly helpful to the organization. It doesn’t tell us why the program was the right solution in the first place, what was learned, why that learning is or is not helpful to participants and other stakeholders, what happens when participants apply the content in their organizations, what the intended and unintended consequences are, what can be done to ensure that the content is applied in a positive way in the future, what organizational factors beyond the training are affecting impact, and what difference, positive and negative, the training has made toward achieving organizational goals. This is the kind of information we need if we want to increase the impact of our learning interventions.

Given this purpose, the course covers methods that can be used to measure and evaluate the process of learning in organizations. I summarize three major approaches to evaluation: Kirkpatrick’s four levels; Phillips’ ROI; and Brinkerhoff’s Success Case Method. And then I explain how to select the best method for the situation and how data (quantitative and qualitative) from any of these methods can be used to improve learning across an organization. If you take a look at the course, I welcome your feedback.
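For readers unfamiliar with the second of these approaches, the calculation at the heart of Phillips’ methodology is commonly expressed as follows (this is the standard published formula, not an excerpt from the course itself):

\[
\text{ROI}(\%) \;=\; \frac{\text{Net Program Benefits}}{\text{Program Costs}} \times 100
\;=\; \frac{\text{Program Benefits} - \text{Program Costs}}{\text{Program Costs}} \times 100
\]

In other words, an ROI of 100 percent means the monetary benefits attributed to the program were double its fully loaded costs.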

For more on this topic, see our new book, Minds at Work: Managing for Success in the Knowledge Economy, published by ATD Press, available now on Amazon.

What Gets Measured Gets Done…Revisited Again

The phrase, "what gets measured gets done," has become a rallying cry for trainers and evaluators. We use this to justify our work and convince CEOs that they should invest in performance measurement. However, as I've argued in previous posts, the saying is not always true and, in fact, is misleading.

One implication of the phrase is that if you measure something (customer service, productivity, sales, revenue, etc.), people will pay attention to what is being measured and do what they can to improve those outcomes. Many examples refute this logic. GM measures the quality of every part and every car, yet it has recalled 29 million vehicles so far this year. The Veterans Health Administration measures patient waiting time, yet it is under congressional scrutiny for wait times that were much too long. Lehman Brothers, once one of the largest investment banks in the U.S., constantly measured the performance of the securities it owned and managed, yet it still had to declare bankruptcy in 2008.

In each of those cases, it appears that key stakeholders had the data but did not use the data to make their decisions. It’s as if they were trying to fulfill a compliance requirement without a commitment to improvement. Or they didn’t want to know because that would mean they would have to change something. Measurement alone is not sufficient; it’s the application of those results to decision-making that gets things done.

A variation on "what gets measured gets done," is, “what you measure is what you get.” To me, this saying has a slightly different meaning. This is more about the importance of choosing the right measure for the situation so that you are reinforcing the intended behavior and not something that you don’t want. I once consulted with a state Blue Cross Blue Shield office that proclaimed their commitment to customer service but evaluated customer service reps on the basis of how many calls they handled each hour. Number of calls handled went up; customer service went down.

Jane Bozarth, in her recent column for Learning Solutions Magazine, writes this about choosing the right measure:

Begin with the end in mind: who is the target audience, what do they need to do, how do we measure whether they are the ones accessing the program, and how do we measure their performance?

So: When looking for measures, try to find things that are meaningful, that give you real information to help real people do their jobs and to help organizations perform more efficiently. Beware of easy measures and vanity metrics.

Good advice! The tendency so often is to look for the lost key under the streetlamp because that’s where the light is. Measures are chosen because “we’ve always done it that way” or because “that’s what we know how to measure” or “that’s what everyone else does.” As Bozarth suggests, decide on what behavior you want and then decide on the best way to measure that behavior. In that way, you’re more likely to get the data and results that you need.

However, here too, the phrase has limits. What you measure is not always what you get. Many organizational factors can intervene. Maybe you are measuring the right things in the best way, but managers don’t value those outcomes, or the findings are not communicated to the stakeholders, or intervening events and unintended consequences are not factored into the results. Again, it’s not measurement per se, but what is done with those measures that makes the difference.

What Bothers Chief Learning Officers?

Chief Learning Officers who belong to the LinkedIn Learning, Education and Training Professionals Group were asked by Jason Silberman to describe their three biggest “pain points”. Silberman wrote, “What makes you emotional - what makes you want to punch a pillow?”

While this was not a scientific survey of CLOs, the 97 comments (to date) give us an indication of the kinds of issues that trouble learning leaders in organizations. I’m especially interested in knowing the challenges of these leaders because I’m co-founder of Learning to be Great™, an online marketplace designed to connect leaders with tools and experts who can help them be successful in their jobs.

After reading through the comments by members of the LinkedIn group, I identified eight major themes. CLOs worry about…

  1. Lack of organization-wide understanding of the purpose and intended results of a program.
    Managers not buying in to the goals; learners not knowing why they were asked to participate; leaders not seeing the “line of sight” from learning interventions to performance outcomes.
  2. Not knowing what results to expect from learning interventions, whether designed internally or purchased from vendors.
  3. Not having the right training professionals who can provide learning interventions to help the organization succeed. Current training and development staff do not have the competencies their organizations need today.
  4. Managers and learners not committed to organizational learning and the learning interventions needed to improve performance. Managers not providing the attention and support that learners need.
  5. Lack of accountability for what happens before and after training that supports learning. Managers not preparing learners and not following up after the program is over.
  6. Top leadership not valuing employee learning. Their expectations are low and this translates into little involvement and support for learning interventions. They consider training to be a cost, not an investment.
  7. Inadequate design and delivery of learning programs. Not using technologies that could facilitate learning. Not matching content with method.
  8. Lack of employee commitment to their own learning and development. Employees not making optimal use of the learning resources that are offered to them.

One striking observation about this list is the absence of a need for more resources. It seems that the “pain” does not come from a lack of time and money but rather from how time and money are used. CLOs worry about wasting the resources they have, not about acquiring more.

Managing to Outcomes in the Nonprofit Sector

This is the traditional time to reflect on trends observed during the past year. One major trend I have observed is performance management in nonprofit organizations. Actually, this is an accelerating trend that got its impetus from Peter Drucker decades ago. Nonprofits are becoming more and more focused on what McKinsey & Company calls “managing to outcomes”.

I know, you thought all organizations managed to outcomes. Not so, particularly in the social sector. Nonprofits have been more like manage-to-what-makes-you-feel-good or manage-to-spend-the-money. It has been less about results and more about the process of doing the work and providing services. Now a shift is occurring toward greater recognition of the importance of measuring outcomes and using that information for learning. That is, learning how to be a more effective organization and how to increase the impact of programs and services.

If your organization is involved in philanthropic activity, if you serve on the board of a nonprofit organization (or NGO), if you work for a nonprofit or a philanthropic foundation, or if you donate to nonprofits, then you should pay special attention to this trend. It will affect what you do and how your money is spent.

This intensified focus on outcomes and learning is the result of a confluence of factors in the growing and increasingly important U.S. nonprofit sector (according to the Foundation Center, “In 2010, nonprofits contributed $804.8 billion to the gross domestic product (GDP); this equates to 5.5 percent of GDP… the nonprofit sector employed 13.7 million people.”). The factors contributing to the change include:

  • Increasing demand for services for the poor, the disadvantaged, the elderly, and the marginalized
  • Less public money for social programs (Mario Morino calls this the “Age of Scarcity”.)
  • More people, and a wider spectrum of people, interested in “doing good”
  • Greater demand by social investors for evidence of impact
  • Greater demand from policy-makers for evidence of impact
  • More leaders on nonprofit and foundation boards who want to see results
  • Increasing professionalization of the sector (e.g., MBAs leading large nonprofits)

So the need for services is growing while resources are dwindling and, at the same time, funders and boards of directors want more accountability and more impact from their investment. Because of all of these factors, there is pressure on the sector to develop more efficient and effective organizations. These are organizations that embrace a culture of measurement and performance improvement. Mary Winkler of the Urban Institute emphasized this point when she said:

…organizational culture and a predisposition to measurement and managing toward results is perhaps the single most important ingredient to success. A culture of continuous improvement needs to be evidenced at the top. Equally important, however, is the extent to which the culture of continuous improvement is integrated at every level of the organization. 

That means having leaders who are willing to learn from what they do and apply that learning to improving their organizations, as well as their programs and services. This is not an easy transition for many of these leaders, who in the past have been accountable only for process and output. They will need the support of funders, staff, constituents, consultants, and the public at large to make this change.

Evaluate the Learning Process

As organizations evaluate their programs and services they need to pay attention to process as well as outcomes. Catherine (Brehm) Rain, of Rain and Brehm Consulting Group, Inc., writes this in aea365:

Process evaluation ensures you answer questions of fidelity… did you do what you set out to with respect to needs, population, setting, intervention and delivery? When these questions are answered, a feedback loop is established so that necessary modifications to the program or the evaluation can be made along the way.

Life is a journey—and so is a long-term evaluation. Stuff happens. However, it is often in the chaotic that we find the nugget of truth, the unknown need, or a new direction to better serve constituents. A well-documented process evaluation assists programs to ‘turn on a dime’, adapt to changing environments and issues, and maximize outcome potential.

The problem with much of the evaluation that goes on in organizations, whether that’s evaluation of training and development programs, communication programs, marketing campaigns, a new sales approach, or strategic planning, is that the process is not examined. You might discover what participants thought of the program from “smile sheets” and you might even know how participants applied what they learned if you are fortunate enough to do a follow-up assessment, but none of this tells you what happened and what could be changed to achieve greater impact in the future. This is what process evaluation does.

If you tell me that you provided coaching, or diversity training, or emotional intelligence training, or relationship selling, that doesn’t tell me what actually happened. These interventions could be very different from organization to organization, from department to department, or from day to day. Even if an organization implements a highly structured program such as The Fish! Philosophy with its video, guidebook, playbook, and accessories, the experience could be very different for participants in different organizations with different facilitators at different times. To assess its value to the organization and improve the program for future audiences, we have to know what happened and how that was experienced by participants.

It’s like a friend telling you that she went to Las Vegas, Nevada for a vacation and had a good time. If that’s all you know, then you don’t know very much about her trip. You couldn’t replicate her experience or improve on it for yourself. Maybe she stayed on the Strip and spent every waking moment gambling in casinos. Or maybe she stayed downtown and spent each day on Hoover Dam, Red Rock, and Grand Canyon sight-seeing trips without ever setting foot in a casino. Both experiences might receive a “five” on the vacation evaluation form but that data would be useless to you until you knew what “Las Vegas Vacation” means in her case.

So too with learning interventions: whether classroom, online, or informal, we need to know the actual process of learning (anticipated and unanticipated) in those situations in order to improve the experience for other learners and maximize impact on organizations. It is not sufficient to know outcomes without knowing what happened to achieve those outcomes.

I’ve evaluated employee training programs that included follow-up coaching. Some participants used the coaching and some did not. Some coaching consisted of several hour-long sessions and some consisted of 15- to 20-minute sessions. Some coaching was done by phone, some online and asynchronously, and some face-to-face. Some coaching was learner-centered and some was coach-centered. Without knowing how the actual coaching process was delivered, we shouldn’t be making decisions about the program.

Lately, the trend in the employee training and development industry has been to emphasize measurement of results and ROI. While I applaud this recognition that end-of-program smile sheets are an inadequate measure of the quality of programs, I think measuring outcomes without describing the factors that contributed to those outcomes is also inadequate. We need both process and outcome evaluation.

 
