TRAINING AND PERFORMANCE APPRAISAL
4
Design Training Systematically and Follow the Science of Training
EDUARDO SALAS AND KEVIN C. STAGL
A national education crisis, employment levels topping 94%, a growing retiree bubble, and the rapid rise of emerging market opportunities are draining an already shallow domestic talent pool. And the scarcity of workers with cultural competence, interpersonal savvy, and technological acumen is not just a US problem: 41% of 37,000 employers across 27 countries report experiencing human capital difficulties (Manpower, 2007).
In response, US employers invest over $126 billion annually on training and development initiatives (Paradise, 2007); more than double their $55 billion annual investment just a decade ago (Bassi and Van Buren, 1998). Employers make sizable investments in training because it is a powerful lever for structuring and guiding experiences that facilitate the acquisition of affective, behavioral, and cognitive (ABCs) learning outcomes by employees (Kraiger, Ford, and Salas, 1993). In turn, learning outcomes can horizontally transfer to the workplace and over time transfer vertically to impact key organizational outcomes (Kozlowski, Brown, Weissbein, Cannon-Bowers, and Salas, 2000). In fact, systematically designed training can even help improve entire national economies (Aguinis and Kraiger, in press).
While trillions are spent annually worldwide on training activities, 52% of employers still report struggling to rapidly develop skills and only 13% claim to have a very clear understanding of the capabilities they will need in the next three to five years (IBM, 2008). Moreover, only 27% find web-based training, and a mere 17% virtual classroom training, effective at meeting their needs (IBM, 2008). Perhaps this is not surprising given that estimates suggest only 10% of training expenditures transfer to the job (Georgenson, 1982) and a meager 5% of solutions are evaluated in terms of organizational benefits (Swanson, 2001). Now, more than ever, there is a need for actionable guidance on designing training systematically.
Fortunately, the science of training has benefited from an explosion of research activity since the late 1990s and has much to contribute to ensuring the vitality of organizations and the domestic and global economies they fuel. For example, Salas and Cannon-Bowers (2000) tapped the science of training to extract fundamental principles and advance targeted guidance for designing systematic training in the first edition of this book series.
Almost a decade later, this chapter continues the tradition by providing a translation mechanism for stakeholders charged with fostering more effective individuals, teams, and organizations via systematic training initiatives. Specifically, we advance a set of theories, principles, guidelines, best practice specifications, and lessons learned that address some of the many linkages among training problems, theories, techniques, and tools.
Our discussion follows a four-phase process for designing blended training solutions. We use our own experiences facilitating learning, and draw heavily upon the science of training, to advance phase-specific guidelines for designing, developing, delivering, and evaluating training solutions (Table 4.1). We cannot overstate the contribution of colleagues, who represent many disciplines, to shaping our thinking on training issues. Next, we describe the success of team training in the aviation industry and discuss the lessons learned from a failure to develop a sales force. We conclude by presenting two scenario-based exercises crafted to impart knowledge about designing and evaluating learning solutions.
Training needs analysis is arguably the most important phase of training design, and its success depends on an intensive collaborative partnership among key stakeholders. The charge of this partnership is to clarify the purposes of training, illuminate the organizational context, define effective performance and its drivers, and begin to cultivate a climate of learning. Essential activities conducted during the needs analysis phase include: (a) conducting training due diligence, (b) defining performance functions and processes, (c) defining affective and cognitive states, (d) defining an attribute model, and (e) delineating learning objectives. When executed with care, these activities help ensure the remaining phases yield a meaningful learning solution.
Conduct due diligence
Training methods and techniques are not interchangeable or universally applicable, as evidenced by recent meta-analytic findings (Klein, Stagl, Salas, Burke, DiazGranados, Goodwin, and Halpin, 2007). For example, the same learning solution may be differentially effective if it is implemented to address short-, mid-, and/or long-term business objectives. Hence, effective instruction in one setting may prove counterproductive elsewhere. This is why it is critical to describe the specific challenges and opportunities training will address, thereby defining what benefits will accrue, and for whom.
Due diligence is a process for clarifying and quantifying the expected benefits from training for individuals, teams, and higher-level units (division, organization, society). The purpose of the process is to gather the information required to have an objective and dispassionate dialog about whether or when a particular solution should be institutionalized. And the conversation must encompass more than just performance, productivity, and profitability concerns, as training can also be a powerful lever for enhancing performance-related factors such as employee satisfaction, team cohesion, social capital, and organizational reputation.
Table 4.1 Summary of training phases, principles and guidelines
A core component of the due diligence process is a pretraining transfer analysis. The analysis helps describe the dimensions targeted for horizontal transfer, the emergent processes of vertical transfer, and the contextual factors that may promote or impinge on the transfer process. For example, the number, scope, and nature of the salient nesting arrangements in an organization must be mapped to determine their potential effects on nested variables (Mathieu, Maynard, Taylor, Gilson, and Ruddy, 2007). Designers should be careful to distinguish between objective situational characteristics and social-psychological perceptions of organizational factors as well as evaluate the embeddedness or bond strength of key dimensions (Kozlowski and Salas, 1997).
Organizational leaders can actively contribute to a climate for learning, or passively inhibit the replication of learned behaviors in the workplace. A thorough stakeholder analysis can identify champions of training and provide a forum for airing concerns. This is important because training, like all initiatives, involves the allocation of limited financial resources which some may feel are better routed to the production of goods or services, marketing campaigns, infrastructure improvements, and/or technology upgrades. Yet, in order for training to be successful there must be both sufficient financial and personal support for it. Hence, parties on both sides of the aisle are best identified and, when appropriate, persuaded in advance. Favorable projections of the net present value of training relative to other capital investments can be particularly persuasive evidence when estimates are based on realistic and conservative inputs.
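To make such a projection concrete, the brief sketch below discounts a hypothetical stream of training benefits against an upfront cost. It is illustrative only; the discount rate, cost, and benefit figures are placeholder assumptions to be replaced with the conservative estimates gathered during due diligence.

```python
# Illustrative net present value (NPV) projection for a proposed training program.
# All figures are hypothetical placeholders, not recommendations.

def npv(cash_flows, discount_rate):
    """Discount a stream of yearly net cash flows (year 0 first) to present value."""
    return sum(cf / (1 + discount_rate) ** year for year, cf in enumerate(cash_flows))

upfront_cost = -250_000           # year 0: needs analysis, design, development, delivery
yearly_net_benefits = [90_000,    # year 1: productivity gains net of maintenance costs
                       120_000,   # year 2
                       120_000]   # year 3

training_npv = npv([upfront_cost] + yearly_net_benefits, discount_rate=0.10)
print(f"Projected NPV of training: ${training_npv:,.0f}")
# A positive NPV that compares favorably with alternative capital investments
# strengthens the case presented to skeptical stakeholders.
```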
Define performance functions and processes
A second training needs analysis activity involves defining performance requirements. Established theories of performance, and taxonomies of performance processes, should be leveraged to precisely define the nature of performance (see Campbell and Kuncel, 2002; Marks, Mathieu, and Zaccaro, 2001; Salas, Stagl, Burke, and Goodwin, 2007). This involves describing, disaggregating, and contextualizing the taskwork and teamwork processes that are critical to overall performance. Behavioral- and cognitive-oriented task inventories, critical incident interviews, focus groups, and card sorts can each help nuance key factors. Protocol analysis, whereby experts verbalize their thoughts during problem solving, is also an especially useful technique for eliciting decision-making processes in natural settings (Ericsson, in press). Training designers should also take steps to model and minimize the systematic and random sources of error inherent to job analysis data.
Once key performance dimensions are defined they should be bracketed by mapping their antecedents and moderators within and outside the focal level of interest (Hackman, 1999). For example, the effectiveness of team coordination can be predicated upon motivated members at an individual level and investments in information technologies at an organizational level. The relative importance of these factors to alternative short-, mid-, and long-term business scenarios should also be illuminated so that specific criteria can be better targeted for improvement. It is also essential to map the projected trajectory of trainee change in these factors over time. Describing the transitional process from novice to expert provides insight into how training content should be developed, delivered, and evaluated.
Define cognitive and affective states
Taskwork and teamwork processes are not executed in isolation. As employees enact performance processes (e.g. situation assessment) they dynamically draw upon and revise their cognitive (e.g. mental models, situation awareness) and affective (e.g. self-efficacy, motivation) states. Designers charged with creating training solutions must describe and frame these states, specify why and how they enable effective performance, and forge instructional experiences that appropriately target them for development. For example, both the content (e.g. declarative knowledge, procedural knowledge) and types of mental models (e.g. situation, task, equipment) should be delineated. Subject matter experts asked to complete an event-based knowledge-elicitation process can provide information that helps identify the states that should be targeted for development by training (Fowlkes, Salas, Baker, Cannon-Bowers, and Stout, 2000).
Once essential states are identified, it is important to determine if, and the extent to which, cognition and affect must be shared or be complementary to enable effective performance in the workplace. This is a particularly acute concern in team training settings because scholars often invoke shared mental models and shared affect to explain how collectives execute both routine and adaptive team performance (Burke, Stagl, Salas, Pierce, and Kendall, 2006). Moreover, cultivating shared affect via training simulations can help prepare teams to navigate even unprecedented challenges (see Klein, Stagl, Salas, Parker, and Van Eynde, 2007).
The unfolding compositional or compilational process via which the content and structure of cognition and affect emerge to the unit level should be clearly specified because it provides insight about the kinds of instructional methods, features, and tools required to facilitate the development of team states. For example, recent meta-analytic evidence suggests that cross-training teams is particularly well suited for imparting shared taskwork and teamwork mental models by providing members with knowledge about their teammates’ tasks, roles, and responsibilities (Stagl, Klein, Rosopa, DiazGranados, Salas, and Burke, unpublished manuscript).
Define KSA attributes
In addition to framing the core processes and cognitive and affective states that collectively underpin effective performance, training practitioners must also define an attribute model. Attribute models specify the direct determinants of performance such as knowledge, skills, and attitudes (KSAs). Training designers should leverage structured attribute inventories, skill repositories, and even performance records to shed light on the KSAs that should be targeted for development by a training solution. For example, the declarative (i.e. what), procedural (i.e. how), and strategic (i.e. why) knowledge required to effectively execute performance processes must be defined. Strategic knowledge is especially important because it allows trainees to understand why and when to apply declarative knowledge (Kozlowski, Gully, Brown, Salas, Smith, and Nason, 2001). This example illustrates that KSAs must first be identified and then ordered in a sequence from those that are more fundamental to those that are more complex in order to maximize the benefit of sequenced learning opportunities.
Training designers should be cognizant that not all of the myriad characteristics and capabilities can be targeted for development via a single training solution, and that not all KSAs are best developed via training methods (Campbell and Kuncel, 2002). Rather, the needs analysis process should allow for the identification of those KSAs that are most essential to performance at multiple levels, with greater emphasis given to dimensions that are most amenable to change via an instructional experience as it is projected to be introduced in a particular setting. The opinions and insights of subject matter experts and past trainees can help practitioners home in on key attributes. Multiple perspectives should also be leveraged when defining which KSAs are most relevant to organizational success given alternative business models.
Delineate learning objectives
The final step in analyzing training needs involves delineating learning objectives. The information gathered from the prior steps of the needs analysis process must be translated into training objectives, learning objectives, and enabling objectives. In practice, task statements are often transformed into learning objectives by supplementing them with contextual information and performance standards. It is important to remember, however, that designers should not force learning objectives into behaviorally based statements of employee actions, as cognitive and affective learning objectives are equally important to fostering effective performance (Kraiger, 2002). Suitable learning objectives are clear, concise, and measurable. To the extent that these three criteria are met, instructional content will be more targeted and ultimately more useful.
The second phase of designing a training solution involves a series of activities undertaken in support of developing training content, including: (a) designing a learning architecture, (b) creating instructional experiences, and (c) developing assessment tools. For the purposes of the present discussion it is assumed training is not confined to classroom walls, in-house or otherwise, but rather is delivered via a blended learning solution that encompasses multiple mediums and locations. While blended learning solutions typically include a dedicated block of classroom time, they also include other instructional mediums such as desktop computer- or web-based learning modules and sometimes even full-task simulators. Interestingly, the findings of a recent survey of 400 human resource executives across 40 countries suggest blended learning solutions are considered the most effective approach for meeting training needs (IBM, 2008). The specific content of a given training solution will vary widely from setting to setting given organizational objectives, financial constraints, and needs.
Design learning architecture
A learning architecture comprises several integrated subsystems which collectively provide the capability to plan, select, author, sequence, push, evaluate, store, and mine learning content, techniques, assessment algorithms, KSA profiles, and performance records. An intelligent scenario management system can be programmed to provide instructional designers, instructors, and trainees with the access, tools, and guidance required to create and change content to reflect operational challenges (Zachary, Bilazarian, Burns, and Cannon-Bowers, 1997).
An intuitive dashboard interface (a graphical user interface with dropdown menus) with simple navigation can be designed to control the content, sequence, and pace of training. For example, employees with limited time for training may occasionally need access to quick refresher tutorials rather than more intensive mastery learning driven lessons. This example hints at what has too long been the holy grail of learning architectures (namely, computer adaptive training). The computing power to fulfill this quest is both readily available and increasingly affordable; so learning architectures should be designed to account for learner differences in goals, competencies, and capabilities and adjust accordingly.
With sufficient development time and psychometric expertise, architectures can be programmed to capitalize on the exponential gains in learning that can result from accounting for aptitude-treatment interactions (Cronbach and Snow, 1977). Embedded tools can assess a trainee’s expertise, abilities, self-efficacy, and goal orientation and tailor training structure, content, feedback, and guidance to maximize the benefits accrued by learners. While the costs of such an approach are front loaded, often requiring substantially more content to be generated, scaled and sequenced, the horizontal and vertical rewards of computer adaptive training will ultimately deliver the knockout blow to static, out-of-the-box solutions.
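The branching logic such tailoring requires can be sketched in a few lines. The thresholds, module names, and proficiency update rule below are purely illustrative assumptions, not a validated model of aptitude-treatment interactions; they simply show how an architecture might route learners differently based on assessed states.

```python
# Minimal sketch of adaptive content selection in a learning architecture.
# Thresholds, module names, and the update rule are illustrative assumptions.

def update_proficiency(current, item_correct, learning_rate=0.2):
    """Nudge a running proficiency estimate toward the latest assessment result."""
    return current + learning_rate * ((1.0 if item_correct else 0.0) - current)

def next_module(proficiency, self_efficacy):
    """Branch on learner state: remediate, practice with guidance, or advance."""
    if proficiency < 0.4:
        return "refresher_tutorial"          # rebuild fundamentals first
    if proficiency < 0.75 or self_efficacy < 0.5:
        return "guided_practice_scenario"    # practice with feedback and hints
    return "complex_transfer_scenario"       # stretch assignment with faded guidance

proficiency = 0.5
for correct in [True, False, True, True]:    # embedded assessment results
    proficiency = update_proficiency(proficiency, correct)
print(next_module(proficiency, self_efficacy=0.8))
```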
Learning architectures can also incorporate additional useful features like text-to-speech conversion, speech recognition, information visualization, perceptual contrasts, and playback. Such systems can highlight important events and cues (Stout, Salas, and Fowlkes, 1997) and pose questions and hints to learners (Lajoie, in press), thereby prompting self-regulatory activities (Bell and Kozlowski, 2002). The encoding and storage capabilities of some architectures can also dynamically capture instructor ratings, comments, and debriefing markers (Smith-Jentsch, Zeisig, Acton, and McPherson, 1998). Systems can also capture and upload information to an intranet or the internet so learners can leverage on-demand tutors, chat rooms, forums, and lessons learned repositories.
Forge instructional experiences
The most important step of developing training content involves forging and blending instructional experiences. The process includes outlining an instructional management plan, instructor guides, and, when necessary, detailed scripts. This stage is another important juncture at which instructional designers can lean on and learn from present or past trainees to help illuminate the multiple branching paths trainees may take when pursuing the attainment of learning objectives (Lajoie, in press). Mapping the potential paths, including some of those less traveled, is essential to accurately forecasting when trainees are likely to falter, crafting meaningful learning experiences, and tailoring appropriate guidance.
When a myriad of branching paths precludes precisely defining a single overarching chronology of experiences, the most effective alternative event timelines should be mapped to help inform scenario sequencing, assessment, and feedback. Content, even in high learner-control environments, must be sequenced to some extent because the development of knowledge structures and complex performance processes is contingent on the prior acquisition and chunking of more fundamental knowledge and skills (Anderson, 1993). This means it is essential to impart the capabilities underlying component tasks prior to developing the KSAs underpinning linking tasks (Goldstein and Ford, 2002). For example, materials presenting general rules and principles should precede those highlighting the structural, functional, and physical relationships among systems. Scenarios, and the experiences they impart, can be sequenced to reflect increasingly complex operational realities (Gagné, Briggs, and Wager, 1988).
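One simple way to honor such prerequisite relationships when assembling a curriculum is to order modules so that every module appears after the content it builds on, as in the sketch below. The module names and prerequisite map are hypothetical examples, not content drawn from any particular training program.

```python
# Sketch: order training modules so prerequisites always precede the modules
# that build on them (a topological sort). Module names and the prerequisite
# map are hypothetical.
from graphlib import TopologicalSorter

prerequisites = {
    "general_rules_and_principles": set(),
    "system_relationships":         {"general_rules_and_principles"},
    "component_task_practice":      {"general_rules_and_principles"},
    "linking_task_practice":        {"component_task_practice", "system_relationships"},
    "complex_integrated_scenario":  {"linking_task_practice"},
}

print(list(TopologicalSorter(prerequisites).static_order()))
# One valid sequence: general rules -> system relationships / component tasks
# -> linking tasks -> complex integrated scenarios.
```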
When instructional content is integrated and orchestrated it serves to foster holistic, meaningful experiences for trainees. For example, contextually grounded scenarios help trainees better understand their learning experience by replicating a familiar workplace. Unfortunately, stakeholders are too often preoccupied with representing, to the greatest degree possible, the operational context via high fidelity technologies. This can be a mistake because physical fidelity is often a secondary concern to the psychological fidelity induced by instructional experiences. The primacy of psychological fidelity is seen in the effectiveness with which computer-based training solutions can supplement, and for some soldiers substitute, for the large-scale military exercises conducted at the US Army’s National Training Center (Chatham, in press, A).
Forging effective instructional experiences requires a systematic consideration of the methods via which information is delivered in training settings. Three of the most common means of packaging training content are information presentation, demonstration, and practice. For example, lectures, exercises, case studies, and games can be used to present information to trainees. In terms of demonstrating or modeling key skills, role-plays, motion pictures, closed-circuit television, and interactive multimedia displays can be used in conjunction or separately. Trainee attention to models can be increased by matching models to the demographic characteristics of trainees and by varying model competence.
In terms of scheduling opportunities for trainees to practice, initially allotting time for massed practice followed by a variable practice schedule can be effective for developing the capabilities required to perform complex tasks. It has also been suggested that interleaving, or providing information, demonstration, and practice across multiple mediums, on a single or small cluster of similar tasks, is more effective than blocking practice on separate tasks (Bjork, in press). Instructional designers should also ensure that there is sufficient spacing between separate training modules and between lessons and learning assessment.
In terms of the types of practice scenarios that instructional developers should craft, novice learners should be exposed to routine obstacles that must be navigated. As trainees move along the trajectory of development, practice difficulty can be increased to simulate complex challenges in increasingly incongruent environments. This requires training designers to systematically specify the dimensions along which training scenarios will become more complex and fluid. Finally, emergency situations and crisis events should be designed so that trainees are forced to persevere through adversity as their expertise accrues (McKinney and Davis, 2003).
Develop assessment tools
Once meaningful instructional experiences have been forged, assessment tools and techniques must be developed to operationalize key learning constructs. A comprehensive treatment of the application of psychometric theory to creating assessment tools and assessing the process and outcomes of learning is beyond the scope of the present discussion. Rather, only broad prescriptions are presented. Training designers are strongly encouraged to seek the consultation of subject matter experts when designing assessment tools, as poorly designed or improperly timed metrics can make even good training appear bad and bad training turn ugly.
The most straightforward guidance is to develop standardized measures of unitary constructs; assess multiple learning outcomes and performance processes; and triangulate the measurement of outcomes via multiple assessment methods (Nunnally and Bernstein, 1994). Following these truisms helps yield the tools required to understand the effects of training. For instance, the findings of an evaluation study may suggest that trainees acquired a great deal of factual information (namely, declarative knowledge) but have no concept of why and when to apply specific facts in context (namely, strategic knowledge). Without reliable measures of different knowledge types, stakeholders evaluating the training solution would be left pondering why transfer failed.
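As a minimal illustration of the first prescription, internal consistency can be estimated separately for each knowledge scale rather than for a single pooled "learning" score. The item responses below are fabricated solely to demonstrate the computation; they do not come from any actual evaluation study.

```python
# Sketch: estimate internal consistency (Cronbach's alpha) separately for a
# declarative-knowledge scale and a strategic-knowledge scale, rather than
# pooling them into one undifferentiated score. Responses are fabricated.
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per item, each covering the same trainees."""
    k = len(items)
    item_variance_sum = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]   # per-trainee total score
    return (k / (k - 1)) * (1 - item_variance_sum / pvariance(totals))

declarative_items = [[1, 0, 1, 1, 0], [1, 0, 1, 1, 0], [1, 0, 1, 1, 1]]
strategic_items   = [[0, 1, 1, 1, 0], [0, 1, 1, 0, 0], [0, 1, 1, 1, 0]]

print(round(cronbach_alpha(declarative_items), 2))
print(round(cronbach_alpha(strategic_items), 2))
```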
Fortunately, comprehensive guidance for designing and applying assessment tools in training initiatives is available to interested readers. For example, recent reviews of assessing learning outcomes have discussed the use of concept maps, card sorts, and pair-wise comparisons for scaling trainee’s knowledge structures (Stagl, Salas, and Day, 2007). The benefits of situational judgment tests for training needs assessment, content delivery, and evaluation have also been discussed at length elsewhere (Fritzsche, Stagl, Salas, and Burke, 2006).
Implementation is a key phase in the training process, in part because it is tightly bound to the organizational system in which training is conducted. More specifically, there are three major activities associated with training implementation: (a) setting the stage for learning, (b) delivering a blended learning solution, and (c) supporting transfer and maintenance. The first and last of these activities involve actions taken to foster a climate for learning.
Set the stage for learning
Setting the stage for learning begins by ensuring trainers are properly prepared to facilitate the delivery of instruction, recognize and assess learning, and reinforce effective performance when it occurs. There are several approaches to preparing trainers to perform their duties, such as rater error training, frame of reference training, and the mental simulation of instructor activities. For example, frame of reference training increases trainers' awareness of, and skill in identifying and assessing, key competency and performance dimensions when they are displayed in training.
The second step in setting the stage for learning involves preparing the trainee for the acquisition of KSAs. This includes measuring and increasing a trainee’s motivation to learn, self-efficacy, and self-regulatory skills (Colquitt, LePine, and Noe, 2000). It is also important to de-emphasize pre-existing power differences, engage less verbal learners, and display an individualized interest in the development of each employee when training in group settings.
Once trainers and trainees are adequately prepared to engage in learning, the purpose and objectives of training must be stated and explained. This is an opportunity to frame training as both a privilege and a necessity by describing why it is instrumental to securing valued outcomes for individuals and their employers. It is also a time to provide a realistic preview of training; advance organizers of the instructional experience can help guide this conversation.
The next step in setting the stage for learning involves stating learning and performance standards so that trainees have appropriate benchmarks against which to gauge their development. In addition to setting standards, trainers should discuss how trainees should pursue goals. For example, performance goals are often sufficient when training is concerned with skill automaticity for simple tasks; however, mastery goals, which emphasize learning rather than the demonstration of ability, are typically more useful for facilitating proficiency in complex task domains (see Chapter 9, this volume). Trainees should also be engaged in helping to set their own proximal and distal training goals.
A final step involves providing trainees with learning tips. For example, attentional advice, strategies, and preparatory information about stressors can alert trainees to important aspects of instruction. Learners should be encouraged to explore, experiment, and actively construct meaning from training events. For example, errors should be framed as opportunities to reflect and delineate lessons that can be transferred to the workplace (Keith and Frese, 2008). Sometimes, trainees may require instruction on how to learn from their failures (Argyris, 1992).
Deliver the blended learning solution
The second stage in implementing training involves delivering the blended learning solution. There are three mechanisms for delivering content: information presentation, modeling, and practice. Information can be presented via the use of lectures, reading assignments, case studies, and open discussions. The specific content of what is discussed is dictated by the particular KSAs targeted for development but should also include descriptions of effective and ineffective performance, common workplace errors, and tactics for meeting business challenges.
Trainees should be encouraged to actively construct, integrate, and associate various facts rather than be treated as passive recipients of instructional content (Schwartz and Bransford, 1998). For example, couching lessons in contrasting cases comprised of alternative, but equally compelling, explanations for some event or dilemma can be a powerful approach for motivating the active construction and acquisition of knowledge and skill (Fritzsche, Stagl, Burke, and Salas, in press). Similarly, perceptual contrasts, or alternative pictorial depictions, can be useful for helping trainees notice the subtle features of information that can be visualized (Bransford, Franks, Vye, and Sherwood, 1989).
Prior to practice, trainees should be asked to engage in symbolic rehearsals or mental simulations of the processes they intend to enact during training. During practice, trainees should be given ample opportunity to repeatedly engage in the cognitive and behavioral actions targeted for development to the point of overlearning. It is important to note, however, that repeated practice is often not sufficient to develop learning outcomes and may even be counterproductive to skill generalization. Rather, trainers should guide trainees through deliberate practice by requiring repetitions on gradually modified tasks (Ericsson, in press). For example, difficulties, obstacles, and equipment malfunctions can be gradually introduced to ramp training complexity as learners develop competence in navigating routine challenges.
When training content is delivered, instructors, an intelligent learning architecture, or a combination of the two, must assess the progress of learners and deliver timely, accurate, and actionable feedback. For example, evaluative and interpretative feedback can be used to provide trainees with adaptive guidance (Bell and Kozlowski, 2002). While novice learners may need more immediate feedback, over time diagnostic information should be faded and delivered more intermittently to gradually remove the scaffolding inherent to training solutions.
Support transfer and maintenance
Training is often concluded when practice and assessment are complete. This is unfortunate because the post-practice stage provides a window of opportunity to enhance learning transfer and maintenance. For example, after action reviews both debrief and educate. Trainers should empower trainees to drive this dialog by soliciting and reinforcing comments while reserving their own input for clarifications (Tannenbaum, Smith-Jentsch, and Behson, 1998). Asking trainees to generate explanations for their actions during training is critical to the process.
Once the debriefing session is complete, trainers should offer final guidance to learners. Trainees, in conjunction with their managers and leaders, should be prompted to set proximal and distal goals for applying newly acquired capabilities in the workplace (Taylor, Russ-Eft, and Chan, 2005). It is also important to advise trainees to reflect on their training experiences and to continually refresh their learning to avoid skill decay. For example, peer-to-peer rehearsals, communities of practice, and online discussion forums can each contribute to facilitating the long-run maintenance of learning.
The final step of training implementation involves intervening in the workplace to help ensure transfer. Engaging a trainee’s managers and supervisors to encourage, recognize, and reward the display of newly acquired KSAs can help foster a climate for learning. Steps should also be taken to minimize the delay between training and operational use of new capabilities.
The final phase in designing systematic training involves evaluating whether training was effective, and more importantly, why it was effective (or ineffective) so that required improvements can be made. Unfortunately, many organizations do not evaluate training effectiveness because evaluation can be costly and resource intensive. It often requires specialized expertise and a team of people who can collect and interpret performance data. However, organizations often fail to consider that ineffective training can be far more costly in the long-term, in terms of poor performance, errors, and missed opportunities, than an investment in training evaluation. Therefore, it is imperative that organizations assess the effectiveness of training and use the information gathered as a means to improve training design.
The first step in evaluating training involves determining the purpose of evaluation as well as the sophistication of the consumers of evaluation study findings (Kraiger, 2002). Many different variables can be measured during training evaluation including affective and utility reactions, expectation fulfillment, the ABCs of learning, and performance on a host of teamwork and taskwork processes, just to name a few. For example, we mentioned earlier in our discussion the importance of assessing the type, amount, and structure of newly acquired knowledge. Moreover, the proceduralization, compilation, and automaticity of new skills can also be gauged.
It is also important to think through the likely results from training evaluation studies. For example, in some cases pretraining levels of performance do not increase as a result of training, as assessed during or immediately after practice, and yet workplace performance improves substantially during an entire fiscal year. This result can occur when errors and difficulties are systematically integrated into a blended learning solution (Bjork, in press). Of course, without an infusion of obstacles, trainers can face the obverse problem of maximizing change on the ABCs of learning without fostering subsequent performance transfer, maintenance, and generalization in the workplace. This scenario demonstrates the importance of having quality metrics but also speaks to the criticality of knowing why and when assessment tools should be applied.
A variety of rigorous experimental designs are available to evaluate training, and the strengths and weaknesses of a particular approach should be identified and addressed before it is used to evaluate training (Shadish, Cook, and Campbell, 2002). There are also a number of non-experimental designs that are very useful for evaluation when situational constraints preclude the use of formal experimentation (Sackett and Mullen, 1993). One of the key questions in deciding among evaluation designs is whether the purpose of training is to facilitate change or to help trainees achieve some standard of performance. Comprehensive guidance on analytic techniques for conceptualizing and gauging change is available in the former case (Day and Lance, 2004). In the latter case, less rigorous research designs can yield useful information.
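When a pretest-posttest design with a comparison group is feasible, one common way to gauge training-related change is a gain-based effect size: the mean gain of trained employees minus the mean gain of untrained employees, scaled by pretest variability. The scores below are hypothetical and serve only to illustrate the calculation.

```python
# Sketch: pretest-posttest control-group effect size for training-related change.
# Scores are hypothetical; replace with data from the evaluation study.
from statistics import mean, stdev

def gain_effect_size(pre_t, post_t, pre_c, post_c):
    n_t, n_c = len(pre_t), len(pre_c)
    pooled_pre_sd = (((n_t - 1) * stdev(pre_t) ** 2 + (n_c - 1) * stdev(pre_c) ** 2)
                     / (n_t + n_c - 2)) ** 0.5
    return ((mean(post_t) - mean(pre_t)) - (mean(post_c) - mean(pre_c))) / pooled_pre_sd

trained_pre, trained_post = [62, 58, 70, 65, 61], [68, 63, 75, 71, 66]
control_pre, control_post = [60, 63, 68, 59, 64], [63, 65, 70, 61, 66]
print(round(gain_effect_size(trained_pre, trained_post, control_pre, control_post), 2))
```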
In order for the findings of evaluation studies to be meaningful there must be consistency between the level of the focal variables and contextual factors, research design, aggregation rules, analysis, and result interpretation (Kozlowski et al., 2000). Consistency is essential to meaningfully estimate the impact of training on organizational performance as well as to yield useful inputs for return on investment, cost-benefit and net present value calculations.
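As one hedged illustration of such calculations, a utility estimate in the Brogden-Cronbach-Gleser tradition converts an observed training effect into dollar terms and then into a simple return-on-investment ratio. Every input below (number trained, duration of the effect, effect size, dollar value of a standard deviation of job performance, and per-trainee cost) is a hypothetical placeholder to be replaced with organization-specific estimates.

```python
# Sketch: dollar-valued utility of training (Brogden-Cronbach-Gleser form) and a
# simple ROI ratio. All inputs are hypothetical placeholders.

def training_utility(n_trained, years_of_effect, effect_size_d,
                     sd_performance_dollars, cost_per_trainee):
    benefit = n_trained * years_of_effect * effect_size_d * sd_performance_dollars
    cost = n_trained * cost_per_trainee
    return benefit - cost, (benefit - cost) / cost   # net utility, ROI

net_utility, roi = training_utility(n_trained=100, years_of_effect=2,
                                    effect_size_d=0.78,
                                    sd_performance_dollars=12_000,
                                    cost_per_trainee=3_500)
print(f"Net utility: ${net_utility:,.0f}; ROI: {roi:.0%}")
```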
A success: the aviation experience
Teamwork improves performance in some jobs; in others it is imperative. For example, teamwork in the cockpit is essential - lives depend on it. We know that 60-80% of the accidents or mishaps in aviation are due to human error and a large percentage of those are caused by coordination and other teamwork problems in the cockpit.
Research on team training has developed many instructional methods and techniques to enhance teamwork in complex environments such as the cockpit. These approaches are pervasive in the aviation industry. In fact, both the military aviation community and the commercial airlines implement systematic team training (Wiener, Kanki, and Helmreich, 1993). The Navy has designed and delivered team training for its aviation platforms for many years. For example, engagement simulations, which provided a forum and format for group experiential learning, helped the US Navy increase its superiority in air-to-air combat by a factor of 12 in one year (Chatham, in press, B). This training continues to be refined and scaled to reach more people more often via personal computer-based flight simulators (Brannick, Prince, and Salas, 2005).
Training scientists and learning specialists, in partnership with subject matter experts, developed an approach that systematically helps instructional developers design and deliver Crew Resource Management (CRM) training in the Navy (Salas, Prince, Bowers, Stout, Oser, and Cannon-Bowers, 1999). This method illustrates how to apply the four phases outlined in this chapter. It begins with an identification of operational and mission requirements and the required competencies and performance processes (i.e. needs analysis). Extensive interviews and observations were conducted in order to ensure the required KSAs for coordination were identified. The literature was reviewed and a theory-based framework developed.
In parallel to this process, the scientists, sponsors, users, and industry representatives met on an ongoing basis to discuss organizational procedures and policies that needed to be in place as the methodology evolved. In the end, this proved to be a very valuable dialog - it prepared the Navy for the training. Specifically, it created a learning climate before, during, and after the training was implemented. Once training objectives were derived and validated by SMEs, the methodology called for designing and creating opportunities for practice and feedback, developing measurement tools for feedback, and implementing the training. The methodology ended with suggestions for ensuring that a multi-component evaluation protocol is built into training.
This methodology has been translated into a detailed set of specifications. These are a set of step-by-step instructions that can be used by instructional designers to develop curriculum and supporting materials. Evaluations of communities following this approach suggest crews react better to the instruction, learn more about teamwork, and exhibit more teamwork behaviors in the cockpit as a result of the training (Salas, Fowlkes, Stout, Milanovich, and Prince, 1999).
In sum, this methodology has been implemented and tested in several communities - and it works. It works because the approach uncovers the needed KSAs and performance processes, prepares the organization for the training, relies on theories of learning, and applies sound instructional principles to the design of team training. It works because the training seeks to diagnose and remedy specific deficiencies. It works because the implementation process sets the right climate for learning and transfer and evaluates its impact. It works because the methodology guides the instructional developer through a systematic process incorporating all the phases outlined here, and utilizes the best information that the science of training can offer.
A failure: training the sales force
Sales at a large telecommunications company were down for the third quarter. Management reviewed several strategies to improve sales and concluded that one solution would be to improve training for the large, dispersed sales force. For the sake of expediency, the training department began using a needs analysis they had conducted several years before as a basis to develop enhanced training. Their plan was first to update the original needs analysis and then to develop new training strategies on the basis of what they found. They also began investigating new training technologies as a possible means to reduce training delivery costs. However, management was so intent on doing something quickly that the training department was ultimately pressured into purchasing a generic, off-the-shelf training package from a local vendor. One of the features of the package that appealed to management was that the course could be delivered over the web, saving the time and expense of having the sales force travel to the main office to receive the training. Hence, even though the package was costly to purchase, the company believed that it was a bargain compared to the expense of developing a new package in-house and delivering it in person to the sales force.
Six months after the training had been delivered, sales were still declining. Management turned to the training department for answers. Because no measures of training performance had been designed, the training department had little information upon which to base its diagnosis. For lack of a better idea, members of the training department began questioning the sales force to see if they could determine why the training was not working. Among other things, the sales people reported that the training was slow and boring, and that it did not teach them any new sales techniques. They also complained that, without an instructor, it was impossible to get clarification on the things they did not understand. Moreover, they reported that they believed that sales were off not because they needed training in basic sales techniques, but because so many new products were being introduced that they could not keep up. In fact, several of the sales people requested meetings with design engineers just so they could get updated product information. The training department took these findings back to management and requested that they be allowed to design a new training package, beginning with an updated needs analysis to determine the real training deficiencies.
So how could this company have avoided this costly mistake? Our contention is, had they engaged in a systematic training design and delivery process, they would have provided effective training and not invested in a useless product. For example, a careful needs analysis would have revealed specific performance deficiencies. In addition, a better assessment of the training delivery - especially as it related to trainee motivation - would have indicated that the web-based course may not have been the best choice. Unfortunately, cases like this occur all too frequently, but can easily be avoided if a systematic process for training design and delivery is followed.
CONCLUSION
As recently as the 1990s, the transformative power of training was not appreciated in some circles, as typified by Jack Welch’s statement “We want only A players. Don’t spend time trying to get C’s to be B’s. Move them out early” (Slater, 1998, p. 157). Today, the mandate to gain competitive advantage via the people who make the place has never been stronger; and even Wall Street wonders like GE are aggressively developing talent. In fact, 75% of 400 HR executives across 40 countries view leader development as mission critical (IBM, 2008).
There was a time from World War II until the 1960s when the training literature was voluminous but largely “. . . nonempirical, nontheoretical, poorly written and dull” (Campbell, 1971). Today, the science of training has much to contribute to facilitating the systematic development of individuals, teams, and organizations. In this chapter we offered a translation mechanism for stakeholders charged with designing systematic training. To the extent that the theories, principles, guidelines, and best practice specifications presented herein are diligently applied, training solutions will be better positioned to provide meaningful learning experiences.
Aguinis, H. and Kraiger, K. (in press). Benefits of training and development for individuals and teams, organizations, and society. To appear in Annual Review of Psychology.
Anderson, J. R. (1993). Problem solving and learning. American Psychologist, 48, 35-44.
Argyris, C. (1992). On Organizational Learning. Malden, MA: Blackwell.
Bassi, L. J., and Van Buren, M. E. (1998). Leading-edge practices, industry facts and figures, and (at last!) evidence that investment in people pays off in better performance: The 1998 ASTD state of the industry report. Alexandria, VA: ASTD.
Bell, B., and Kozlowski, S. W. (2002). Adaptive guidance: enhancing self-regulation, knowledge and performance in technology-based training. Personnel Psychology, 55, 267-306.
Bjork, R. A. (in press). Structuring the conditions of training to achieve elite performance. To appear in K. A. Ericsson (ed.), The Development of Professional Performance: Approaches to Objective Measurement and Designed Learning Environments. Cambridge University Press.
Brannick, M. T., Prince, C., and Salas, E. (2005). Can PC-based systems enhance teamwork in the cockpit? The International Journal of Aviation Psychology, 15, 173-187.
Bransford, J. D., Franks, J. J., Vye, N. J., and Sherwood, R. D. (1989). New approaches to instruction: Because wisdom can’t be told. In S. Vosniadou and A. Ortony (eds), Similarity and Analogical Reasoning (pp. 470-497). Cambridge: Cambridge University Press.
Burke, S., Stagl, K. C., Salas, E., Pierce, L., and Kendall, D. (2006). Understanding team adaptation: A conceptual analysis and framework. Journal of Applied Psychology, 91, 1189-1207.
Campbell, J. P. (1971). Personnel training and development. Annual Review of Psychology, 22, 565-603.
Campbell, J. P., and Kuncel, N. R. (2002). Individual and team training. In N. Anderson, D. S. Ones, H. K. Sinangil, and C. Viswesvaran (eds), Handbook of Industrial, Work and Organizational Psychology (pp. 272-312). London, England: Sage.
Chatham, R. E. (in press, A). Toward a second training revolution: promise and pitfalls of digital exponential training. To appear in K. A. Ericsson (ed.), The Development of Professional Performance: Approaches to Objective Measurement and Designed Learning Environments. Cambridge University Press.
Chatham, R. E. (in press, B). The 20th century revolution in military training. To appear in K. A. Ericsson (ed.), The Development of Professional Performance: Approaches to Objective Measurement and Designed Learning Environments. Cambridge University Press.
Colquitt, J. A., LePine, J. A., and Noe, R. A. (2000). Toward an integrative theory of training motivation: A meta-analytic path analysis of 20 years of research. Journal of Applied Psychology, 85, 678-707.
Cronbach, L. J., and Snow, R. E. (1977). Aptitudes and Instructional Methods: A Handbook for Research on Interactions. New York: Irvington.
Day, D., and Lance, C. (2004). Understanding the development of leadership complexity through latent growth modeling. In D. Day, S. J. Zaccaro, and S. M. Halpin (eds), Leader Development for Transforming Organizations. Mahwah, NJ: Erlbaum Associates.
Ericsson, K. A. (in press). Enhancing the development of professional performance: Implications from the study of deliberate practice. To appear in K. A. Ericsson (ed.), The Development of Professional Performance: Approaches to Objective Measurement and Designed Learning Environments. Cambridge University Press.
Fowlkes, J. E., Salas, E., Baker, D. P., Cannon-Bowers, J. A., and Stout, R. J. (2000). The utility of event-based knowledge elicitation. Human Factors, 42, 24-35.
Fritzsche, B. A., Stagl, K. C., Salas, E., and Burke, C. S. (2006). Enhancing the design, delivery, and evaluation of scenario-based training: Can situational judgment tests contribute? In J. A. Weekley and R. E. Ployhart (eds), Situational Judgment Tests. Mahwah, NJ: Erlbaum.
Fritzsche, B. A., Stagl, K. C., Burke, C. S., and Salas, E. (in press). Developing team adaptability and adaptive team performance: The benefits of active learning. Human Performance.
Gagné, R. M., Briggs, L. J., and Wager, W. W. (1988). Principles of Instructional Design (3rd edition). New York: Holt, Rinehart and Winston.
Georgenson, D. L. (1982). The problem of transfer calls for partnership. Training and Development Journal, 36, 75-78.
Goldstein, I. L., and Ford, J. K. (2002). Training in Organizations. (4th edition). Belmont, CA: Wadsworth Thompson Learning.
Hackman, J. R. (1999). Thinking differently about context. In R. Wageman (ed.), Research on Managing Groups and Teams: Groups in Context. Stamford, Connecticut: JAI Press.
IBM (2008). Unlocking the DNA of the Adaptable Workforce, the IBM Global Human Capital Study. Milwaukee, WI: IBM.
Keith, N., and Frese, M. (2008). Effectiveness of error management training: A meta-analysis. Journal of Applied Psychology, 93, 59-69.
Klein, C., Stagl, K. C., Salas, E., Burke, C. S., DiazGranados, D., Goodwin, G. F., and Halpin, S. M. (2007). A meta-analytic examination of team development interventions. Poster presented at the 22nd Annual Conference of the Society for Industrial and Organizational Psychology, New York, NY.
Klein, C., Stagl, K. C., Salas, E., Parker, C., and Van Eynde, D. (2007). Returning to flight: Simulation-based training for the US National Aeronautics and Space Administration’s Mission Management Team. International Journal of Training and Development, 11, 132-138.
Kozlowski, S. W. J., Brown, K. G., Weissbein, D. A., Cannon-Bowers, J. A., and Salas, E. (2000). A multilevel perspective on training effectiveness: Enhancing horizontal and vertical transfer. In K. J. Klein and S. W. J. Kozlowski (eds), Multilevel Theory, Research, and Methods in Organizations (pp. 157-210). San Francisco: Jossey-Bass.
Kozlowski, S. W. J., Gully, S. M., Brown, K., Salas, E., Smith, E., and Nason, E. (2001). Effects of training goals and goal orientation traits on multidimensional training outcomes and performance adaptability. Organizational Behavior and Human Decision Processes, 85, 1-31.
Kozlowski, S. W. J., and Salas, E. (1997). An organizational system approach for the implementation and transfer of training. In J. K. Ford and Associates (eds), Improving Training Effectiveness in Work Organizations (pp. 247-290). Hillsdale, NJ: LEA.
Kraiger, K. (2002). Decision-based evaluation. In K. Kraiger (ed.), Creating, Implementing and Maintaining Effective Training and Development: State-of-the art Lessons for Practice (pp. 331-375). San Francisco: Jossey-Bass.
Kraiger, K., Ford, J. K., and Salas, E. (1993). Application of cognitive, skill-based, and affective theories of learning outcomes to new methods of training evaluation. Journal of Applied Psychology, 78, 311-328.
Lajoie, S. P. (in press). Developing professional expertise with a cognitive apprenticeship model: Examples from avionics and medicine. To appear in K. A. Ericsson (ed.), The Development of Professional Performance: Approaches to Objective Measurement and Designed Learning Environments. Cambridge University Press.
Manpower (2007). Talent shortage study: 2007 global results. http://files.shareholder.com/downloads/MAN.
Marks, M. A., Mathieu, J. E., and Zaccaro, S. J. (2001). A temporally based framework and taxonomy of team process. Academy of Management Review, 26, 356-376.
Mathieu, J. E., Maynard, M. T., Taylor, S. R., Gilson, L. L., and Ruddy, T. M. (2007). An examination of the effects of organizational district and team contexts on team processes and performance: A meso-mediational model. Journal of Organizational Behavior, 28, 891-910.
McKinney, E. H., and Davis, K. J. (2003). Effects of deliberate practice on crisis decision performance. Human Factors, 45, 436-444.
Nunnally, J., and Bernstein, I. (1994). Psychometric Theory. New York: McGraw-Hill.
Paradise, A. (2007). State of the Industry: ASTD’s Annual Review of Trends in Workplace Learning and Performance. Alexandria, VA: ASTD.
Sackett, P. R., and Mullen, E. J. (1993). Beyond formal experimental design: Towards an expanded view of the training evaluation process. Personnel Psychology, 46, 613-627.
Salas, E., and Cannon-Bowers, J. A. (2000). Design training systematically. In E. A. Locke (ed.), The Blackwell Handbook of Principles of Organizational Behavior (pp. 43-59). Malden, MA: Blackwell.
Salas, E., Fowlkes, J. E., Stout, R. J., Milanovich, D. M., and Prince, C. (1999). Does CRM training improve teamwork skills in the cockpit? Two evaluation studies. Human Factors, 41, 327-343.
Salas, E., Prince, C., Bowers, C. A., Stout, R. J., Oser, R. L., and Cannon-Bowers, J. A. (1999). A methodology for enhancing crew resource management training. Human Factors, 41, 61-72.
Salas, E., Stagl, K. C., Burke, C. S., and Goodwin, G. F. (2007). Fostering team effectiveness in organizations: Toward an integrative theoretical framework of team performance. In J. W. Shuart, W. Spaulding, and J. Poland (eds), Modeling Complex Systems: Motivation, Cognition, and Social Processes (pp. 185-243). Lincoln, NE: University of Nebraska Press.
Schwartz, D. L., and Bransford, J. D. (1998). A time for telling. Cognition and Instruction, 16, 475-522.
Shadish, W. R., Cook, T. D., and Campbell, D. T. (2002). Experimental and Quasi-Experimental Designs for Generalized Causal Inference. New York: Houghton Mifflin Company.
Slater, R. (1998). Jack Welch and the GE Way: Management Insights and Leadership Secrets of the Legendary CEO. Columbus, OH: McGraw-Hill.
Smith-Jentsch, K. A., Zeisig, R. L., Acton, B., and McPherson, J. A. (1998). Team dimensional training: A strategy for guided team self-correction. In J. A. Cannon-Bowers and E. Salas (eds), Making Decisions Under Stress: Implications for Individual and Team Training. Washington, DC: American Psychological Association.
Stagl, K. C., Klein, C., Rosopa, P. J., DiazGranados, D., Salas, E., and Burke, C. S. (unpublished manuscript). The effects of cross training teams: A meta-analytic path model.
Stagl, K. C., Salas, E., and Day, D. V. (2007). Assessing team learning outcomes: Improving team learning and performance. In V. I. Sessa and M. London (eds), Work Group Learning: Understanding, Assessing, and Improving How Groups Learn in Organizations (pp. 369-392). New York, NY: Taylor and Francis.
Stout, R. J., Salas, E., and Fowlkes, J. E. (1997). Enhancing teamwork in complex environments through team training. Group Dynamics: Theory, Research and Practice, 1, 169-182.
Swanson, R. A. (2001). Assessing the Financial Benefits of Human Resource Development. Cambridge, MA: Perseus.
Taylor, P. J., Russ-Eft, D. F., and Chan, D. W. L. (2005). A meta-analytic review of behavior modeling training. Journal of Applied Psychology, 90, 692-709.
Tannenbaum, S. I., Smith-Jentsch, K. A., and Behson, S. J. (1998). Training team leaders to facilitate team learning and performance. In J. A. Cannon-Bowers and E. Salas (eds), Making Decisions Under Stress: Implications for Individual and Team Training. Washington, DC: American Psychological Association.
Wiener, E. L., Kanki, B. J., and Helmreich, R. L. (1993). Cockpit Resource Management. San Francisco, CA: Academic Press.
Zachary, W., Bilazarian, P., Burns, J., and Cannon-Bowers, J. A. (1997). Advanced embedded training concepts for shipboard systems. Proceedings of the 19th Annual Interservice/ Industry Training, Simulation and Education Conference (pp. 670-679). Orlando, FL: National Training Systems Association.
Training planning
A large computer manufacturer experienced declining revenues during FY08. After careful analysis, the executive team determined that global competitors were squeezing their market share by offering a suite of comprehensive technical support services for their computers. A mandate was issued from the c-suite to recruit and develop the human capital required to offer similar services. As the chief learning officer it is your charge to ensure your organization has proficient people in place by the close of the second quarter 2009. To accomplish your objectives, develop a plan for what kinds of training will be conducted, why training will be conducted, and how proposed solutions will contribute to meeting the immediate, mid-range, and long-run needs of your organization. Be precise in specifying what internal and external resources and support are required to design and implement the process in a timely manner.
Training evaluation
A management consultancy was contracted by a medical supply firm to evaluate an in-house training solution for medical representatives. Discouraged by the findings that medical reps were apparently not learning during training (as measured by post-training reactions and declarative knowledge), yet seemed to be performing very effectively in the workplace, the consulting house turned to you as a subject matter expert to review their training evaluation study results. Describe the steps you will take to help the consultants uncover the true impact of the training solution. Think through training design, criterion identification, change measurement, and data analysis issues. State your recommendations for conducting a stronger evaluation.