Training Assessments: Aligning Training Intervention to Business Outcomes


What is training worth? Take a few moments to think about the question. What is training really worth? According to the ASTD (2012), U.S. organizations spent more than $156 billion on employee learning in 2012. Yet most learning and development organizations lack the time, resources, and support to measure the business impact of training (Chief Learning Officer Media, 2013). Furthermore, “few line leaders rate [the] L&D function as critical to achieving business outcomes” (The Corporate Executive Board, 2011, p. 2). The gap between L&D investment and business impact is widespread across business functions, but it is particularly acute in profit centers like sales, where more than $20 billion is spent annually on sales training (Canaday, 2012). Indeed, Gschwandtner, as cited in Canaday (2012), argues, “improving skills and knowledge is a means to an end. The top priority of sales trainers should…be on improved business results. If we fail to link training to business impact, we’re sabotaging business progress” (p. 2). Attia, Honeycutt Jr., and Leach (2005), in A Three-Stage Model for Assessing and Improving Sales Force Training and Development, propose an in-depth model to help link training outcomes to business impact while improving training efforts. What follows in this paper is a summary of Attia, et al.’s (2005) model, followed by a critique and a discussion of implementation considerations. This author’s goal is twofold: to inform practice and to help mature the model.

Overview

To begin, it is worth understanding the purpose of training: to enhance human performance (Silberman & Auerbach, 2006). To what end? In the massive edifice that is institutional learning and development, human performance is directed towards institutional goals; hence the need to measure the impact of training on the institution. Of course, measuring impact is not a new phenomenon. Indeed, most organizations use Kirkpatrick’s (1959) four-level model of evaluation (reactions, learning, behavior, and results) to assess training (Attia, et al., 2005; Training Magazine, 2012). However, firm outcomes, the results level, are the most difficult to assess (Honeycutt Jr. & Stevenson, 1989; Kirkpatrick, 1994; Lupton, Weiss, & Peterson, 1999). Thus, Attia, et al. (2005) evolve Kirkpatrick’s model to integrate prevailing ideas that begin with the end in mind.

Specifically, Attia, et al. (2005) propose a three-stage model (assessment of needs, impact on sales trainees, and impact on the firm) encompassing eight assessment areas. Their model differs substantially from Kirkpatrick’s by incorporating an assessment of needs at the outset. In summary, the authors suggest their model provides four main benefits: 1) aligning training with strategic objectives, 2) identifying causes of training failure, 3) enabling continuous improvement, and 4) determining the investment value of training (Attia, et al., 2005). What follows is a descriptive summary of each stage, with implementation considerations as warranted.
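For readers who want the structure at a glance, the sketch below encodes the three stages and eight assessment areas as a simple Python data structure a practitioner might use to track assessment coverage. The layout is this author's illustration, not something prescribed by Attia, et al. (2005).

```python
# An outline of Attia, et al.'s (2005) three stages and eight assessment areas,
# encoded as a Python dictionary a practitioner might use to track which
# assessments a given training initiative has completed. The data structure is
# this author's illustration; the model itself does not prescribe one.

THREE_STAGE_MODEL = {
    "Stage 1: Assessment of needs": [
        "Firm and sales force-level needs",
        "Salesperson-level needs",
    ],
    "Stage 2: Impact on sales trainees": [
        "Reaction to training",
        "Knowledge level / knowledge acquisition",
        "Transfer facilitation",
        "Transfer of learning",
    ],
    "Stage 3: Impact on the firm": [
        "Impact on firm-level objectives",
        "Value and return on investment",
    ],
}

# Print a quick checklist for a new training initiative.
for stage, areas in THREE_STAGE_MODEL.items():
    print(stage)
    for area in areas:
        print(f"  [ ] {area}")
```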

Assessment of Needs

The assessment of needs is broken down into two levels: 1) firm and sales force-level needs, and 2) specific salesperson needs (Attia, et al., 2005). Underlying the authors’ proposition is the idea that effective sales training must be aligned with “change initiatives and understood in a strategic context” (Attia, et al., 2005, p. 255). Therefore, the starting point is an assessment of firm-level objectives based on Kaplan & Norton’s (1996) Balanced Scorecard, a strategic management system that aligns strategy, measurement, and execution. Based on the strategic direction of the firm, “sales executives must evaluate the abilities of the sales force and determine if it possesses the capabilities required to fulfill its role in achieving organizational objectives” (Attia, et al., 2005, p. 256).

In fact, a strong point of the authors’ reliance on the Balanced Scorecard is the framework’s explicit connection of learning and growth initiatives with vision and strategy (Balanced Scorecard Institute, 2013). At the same time, a practitioner’s reliance on the Balanced Scorecard may be problematic, given that different organizations use different strategic management frameworks, and some use none at all. Nevertheless, Attia, et al. (2005) make an important contribution insofar as a formal assessment of firm-level objectives provides the strategic context for training initiatives and a baseline for measuring firm-level outcomes in later stages.

The second assessment area in the assessment of needs stage is salesperson-level needs. Attia, et al. (2005) recommend sales managers assess each salesperson’s need for training based on their deficiencies and on whether they will benefit from the training. In addition, the authors suggest that when salesperson-level assessment is not possible, sales managers should adopt a segmentation strategy (Attia, et al., 2005).

Assessment of Impact on Sales Trainees

According to Attia, et al. (2005), this stage is designed to measure how training impacts the trainees. There are four assessment areas specific to this stage: 1) reaction to training, 2) knowledge level/knowledge acquisition, 3) transfer facilitation, and 4) transfer of learning. First, the authors recognize the importance of capturing reactions, yet suggest this measure is of limited value as a predictor of learning (Attia, et al., 2005). Second, knowledge level/knowledge acquisition measures the acquisition and retention of new information or attitudes. The authors suggest the best way to measure knowledge acquisition is to establish a baseline prior to the training and measure the difference, or change, post-training. However, in this author’s experience, sales management does not typically have the patience for pre- and post-testing and is usually satisfied with level-of-knowledge assessments.
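As a rough illustration of the pre/post approach, the sketch below computes a simple knowledge-gain score for one trainee. The scoring logic, field names, and the 70 percent mastery threshold are this author's assumptions for illustration, not part of Attia, et al.'s (2005) model.

```python
# Hypothetical pre/post knowledge-acquisition scoring for one trainee.
# The field names and the 70% mastery cut-off are illustrative assumptions,
# not part of Attia, et al.'s (2005) model.

def knowledge_gain(pre_score: float, post_score: float, max_score: float = 100.0) -> dict:
    """Return absolute and normalized knowledge gain for a single trainee."""
    absolute_gain = post_score - pre_score
    headroom = max_score - pre_score          # room left to improve before training
    normalized_gain = absolute_gain / headroom if headroom > 0 else 0.0
    return {
        "absolute_gain": absolute_gain,
        "normalized_gain": round(normalized_gain, 2),   # share of the gap closed
        "mastery": post_score >= 0.7 * max_score,       # illustrative threshold
    }

# Example: a trainee who scored 55 on the pre-test and 85 on the post-test.
print(knowledge_gain(pre_score=55, post_score=85))
# {'absolute_gain': 30, 'normalized_gain': 0.67, 'mastery': True}
```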

The third assessment area is the transfer facilitation measure; once again, it is a departure from Kirkpatrick’s model (Attia, et al., 2005). The transfer facilitation measure is designed to gauge trainees’ belief in their capabilities and their motivation to transfer learning into the organizational context (Attia, et al., 2005). The authors’ rationale for including this measure is worth quoting at length (Attia, et al., 2005):

Given the importance of transfer to the attainment of organizational objectives, and the apparent inability of knowledge assessments alone to predict transfer, we propose that firms assess trainee transfer intentions as well as other variables that identify a training intervention’s effectiveness at facilitating learning transfer. Specifically, this includes investigating a sales trainee’s level of self-efficacy and motivation to transfer. (p. 259)

Finally, transfer of learning measures the extent of behavior change exhibited by trainees in the organizational setting, typically assessed by sales managers, through self-reports, or through content analysis of logs or diaries (Attia, et al., 2005).
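One plausible way to operationalize the transfer facilitation measure is to average Likert-scale survey items for self-efficacy and motivation to transfer, the two constructs the authors name. The sketch below assumes a hypothetical 1-5 scale and follow-up threshold; none of these specifics come from Attia, et al. (2005).

```python
# Illustrative transfer-facilitation summary: averaging Likert-scale survey items
# for self-efficacy and motivation to transfer, the two constructs the authors
# propose assessing. The 1-5 scale and the 3.0 follow-up threshold are assumptions.

from statistics import mean

def transfer_facilitation_score(self_efficacy_items: list[int],
                                motivation_items: list[int]) -> dict:
    """Summarize one trainee's self-reported readiness to transfer learning."""
    self_efficacy = mean(self_efficacy_items)
    motivation = mean(motivation_items)
    return {
        "self_efficacy": round(self_efficacy, 2),
        "motivation_to_transfer": round(motivation, 2),
        # Flag trainees who may need manager follow-up before transfer is likely.
        "needs_follow_up": self_efficacy < 3.0 or motivation < 3.0,
    }

# Example: one trainee's responses on a 1-5 scale.
print(transfer_facilitation_score([4, 4, 3], [2, 3, 2]))
# {'self_efficacy': 3.67, 'motivation_to_transfer': 2.33, 'needs_follow_up': True}
```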

Assessment of Impact on the Firm

This stage includes measurement of: 1) the impact of the training intervention on firm-level objectives, and 2) the value and return on investment of the training intervention (Attia, et al., 2005). In fact, the two measures go hand-in-hand. The first measure requires an attempt to isolate the impact of training on the original training objectives or targeted business results. The authors describe some of the more typical sales objectives, like higher sales volume, lower selling costs, more selling time, and the like. More importantly, they accurately describe the difficulty of linking training to firm-level objectives given the sheer number of other factors at play, like the competitive landscape, marketing investment, or economic conditions (Attia, et al., 2005). Thus, the authors recommend an experimental design with a control group to understand the material difference between groups (Attia, et al., 2005). This author has used the control group concept by rolling out training to one geographic segment at a time, with sufficient time to measure performance differences. However, sales management is typically under considerable performance pressure, which makes this approach difficult to implement.
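As a minimal sketch of the staggered, control-group rollout described above, the example below uses a simple difference-in-differences comparison of average sales per rep in a trained region versus an untrained region. All figures are hypothetical, and a real analysis would still need to account for the confounding factors the authors mention.

```python
# A minimal difference-in-differences sketch for a staggered geographic rollout:
# the estimated training effect is the change in the trained region minus the
# change in the untrained (control) region over the same period. All figures
# are hypothetical; real isolation of training effects would still need to
# consider the confounders the authors mention (competition, marketing, economy).

def diff_in_diff(trained_pre: float, trained_post: float,
                 control_pre: float, control_post: float) -> float:
    """Estimate the training effect as (trained change) minus (control change)."""
    return (trained_post - trained_pre) - (control_post - control_pre)

# Example: average monthly sales per rep, in dollars (hypothetical).
effect = diff_in_diff(trained_pre=42_000, trained_post=48_500,
                      control_pre=41_500, control_post=43_000)
print(f"Estimated training effect per rep per month: ${effect:,.0f}")
# Estimated training effect per rep per month: $5,000
```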

Finally, Attia, et al. (2005) recommend converting behavior changes and firm-level outcomes, or the value of the training, into dollar amounts and comparing those with the costs of the training. Again, the authors recommend isolating the effects of training from other potential variables.
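A hedged sketch of that final calculation follows: the conventional net-benefit-over-cost ROI ratio, fed with hypothetical inputs that would, in practice, come from the control-group analysis above.

```python
# Conventional ROI calculation: net program benefits over program costs.
# The benefit figure is hypothetical and assumes the $5,000-per-rep monthly
# effect estimated in the sketch above, applied to 20 trained reps for a year.

def training_roi(monetary_benefits: float, training_costs: float) -> float:
    """Return ROI as a percentage of training costs."""
    return (monetary_benefits - training_costs) / training_costs * 100

annual_benefit = 5_000 * 20 * 12        # $1,200,000 in isolated training benefit
program_cost = 600_000                  # hypothetical fully loaded training cost
print(f"ROI: {training_roi(annual_benefit, program_cost):.0f}%")
# ROI: 100%
```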

Discussion and Critique

Attia, et al. (2005) contribute significantly to the literature by recommending a more comprehensive model than the one primarily in use today, namely Kirkpatrick’s Four Levels of Evaluation. Whereas Kirkpatrick’s (1959) model begins once the training is completed, Attia, et al. (2005) begin at the beginning, with a needs assessment at the firm level. Moreover, their model measures the trainee’s self-efficacy and motivation to apply new knowledge, both absent from Kirkpatrick’s model. Finally, their three-stage model includes measuring the return on investment of the training intervention. In this author’s opinion, each of the aforementioned innovations makes their model superior to earlier models.

Notwithstanding the comprehensiveness of the authors’ model, there is room for improvement. Indeed, this author believes the model might benefit from a broader perspective. First, instead of relying on the Balanced Scorecard, the model might be expanded to consider other strategic planning and management systems in use today, like Total Quality Management, Six Sigma, or Lean Six Sigma. Moreover, guidance should be provided for applying the framework within institutions that do not employ a strategic planning and management system.

Second, Attia, et al.’s (2005) model treats all training interventions similarly. Indeed, their model is applicable across a variety of training scenarios. However, acknowledging the major themes of onboarding, continuous training, just-in-time training for products, and training for transformational change initiatives might change how the model is deployed. For example, it makes little sense to go through the rigor of experimental design for every product rollout. Furthermore, onboarding programs share recurring value themes for measuring return on investment; articulating such value themes by training type might further adoption of the model.

Third, the return on investment measure is a single training measure, admittedly designed to help justify the costs of training. Importantly, the authors acknowledge how important it is for every cost center, learning and development included, to measure and justify its value to the organization. Nevertheless, the narrow focus on return on investment is looking at the tree instead of the forest. This author believes we need to see both. In this case, the forest is value. Value to the organization is more than building training programs that meet discrete objectives; it also includes enhancing human performance, improving speed to market, and fostering innovation, to name but a few examples.

Finally, the three-stage assessment model is oriented toward specific training interventions, as opposed to a long-term investment in developing the people in an organization. In this author’s opinion, implementers of the three-stage assessment model should augment it with a developmental measurement capability to gauge long-term human growth, like Laske’s Corporate Development Readiness and Effectiveness Measure (Laske & Maynes, 2002). Thus, the same principles and measurement rigor applied to learning are equally applied to development.

Conclusion

In summary, Attia, et al. (2005) make significant improvements to the standard measurement model in use by most organizations by linking needs to outcomes, addressing the affective state of trainees, and focusing the end result on the return on investment. At the same time, there is significant room for improvement: addressing additional strategic planning and management frameworks, providing guidance for differing training scenarios, incorporating a return-on-value perspective, and incorporating a measure of development. However, given the vast amounts of money spent on training in the United States, Attia, et al. (2005) provide an achievable approach to ensuring the money is well spent.

References

ASTD. (2012). ASTD 2012 state of the industry report. Alexandria, VA: ASTD.

Attia, A. M., Honeycutt Jr., E. D., & Leach, M. P. (2005). A three-stage model for assessing and improving sales force training and development. Journal of Personal Selling and Sales Management, XXV(3), 253-268.

Balanced Scorecard Institute. (2013). Balanced scorecard basics. Retrieved from http://www.balancedscorecard.org/bscresources/aboutthebalancedscorecard/tabid/55/default.aspx

Canaday, H. (2012). The Transformation of Enterprise Sales Training. San Francisco, CA: Selling Power Magazine.

Chief Learning Officer Media. (2013). Slowly, steadily measuring impact. Retrieved from http://clomedia.com/articles/view/slowly-steadily-measuring-impact

Honeycutt Jr., E. D., & Stevenson, T. H. (1989). Evaluating sales training programs. Industrial Marketing Management, 18(August), 215-222.

Kaplan, R. S., & Norton, D. P. (1996). Using the balanced scorecard as a strategic management system. Harvard Business Review, 74(1), 75-85.

Kirkpatrick, D. L. (1959). Techniques for evaluating training programs. Journal of the American Society for Training and Development, 13(11), 3-9.

Kirkpatrick, D. L. (1994). Evaluating training programs: The four levels. San Francisco: Berrett-Koehler.

Laske, O. E., & Maynes, B. (2002). Growing the top management team: Supporting mental growth as a vehicle for promoting organizational learning. Journal of Management Development, 21(9), 702-727.

Lupton, R. A., Weiss, J. E., & Peterson, R. T. (1999). Sales training evaluation model (STEM): A conceptual framework. Industrial Marketing Management, 28(January), 73-86.

Silberman, M. L., & Auerbach, C. (2006). Active training: A handbook of techniques, designs, case examples, and tips (3rd ed.). San Francisco: Pfeiffer.

The Corporate Executive Board. (2011). Driving Business Impact. Washington DC: The Corporate Executive Board.

Training Magazine. (2012). Training magazine ranks 2012 Top 125 organizations. Retrieved from http://www.trainingmag.com/article/training-magazine-ranks-2012-top-125-organizations

 
