In the spring semester of 2023, I conducted an evaluation of the Washington Headquarters Services’ (WHS) Aspiring Leader Program (ALP) with three of my Organizational Performance and Workplace Learning (OPWL) peers (Victoria Couch, Nolan Lovett, and Kaleigh Wood). This case summary provides a high-level overview of the systematic approach used to evaluate the ALP and the factors considered in identifying areas for improvement and developing recommendations for the program’s content and marketability.
The full 81-page report can be reviewed by expanding the document on the right.
The WHS is the shared services provider for the Office of the Secretary of Defense and the Department of Defense. Our evaluation focused on WHS’s ALP, a leadership development initiative established in 2015 for employees within the WHS and the WHS-serviced population; specifically, those between federal General Schedule grades 11 and 13, or equivalent, who have demonstrated leadership potential.
The ALP is a 12-month program with the objective of creating a highly agile and proficient workforce capable of assuming more significant leadership roles in the future. During the program, participants attend various training sessions, receive coaching, and work on a Corporate Impact Project while also completing assigned coursework.
Stakeholders identified for this evaluation played a role in the design, development, implementation, and maintenance of the program. The clients (noted below with an *) were directly involved with this evaluation.
WHS Director
WHS Human Resource Director
Director of IOD
Program Manager*
Deputy Program Manager*
In collaboration with the client, the evaluation team crafted a purpose statement to succinctly articulate the objective of this evaluation:
Conduct a formative evaluation of the ALP and identify ways the program designers can improve the program for its participants. The program designers will use the evaluation findings to implement changes that provide excellent career and leadership development, thereby creating better leaders within WHS who can take on roles with greater responsibility, as well as improving the program for future participants.
Utilizing Chyung’s (2019) 10-step evaluation process, the evaluation team conducted a formative evaluation of the ALP. By following this systematic process, the team progressed through three main phases: Identification, Planning, and Implementation. The following sub-sections provide an overview of each phase, with a focus on how the evaluation team applied the 10 steps to achieve the key deliverables.
Figure 1
Evaluation Procedure
The evaluation started with identification of the evaluand, which originated from a formal submission to the OPWL program by the ALP Deputy Program Manager. Once the evaluand was agreed upon, the team executed the following actions:
Collaborated with the client to identify the stakeholders (Upstream) and impactees (Direct and Indirect), which enabled the team to gain a clearer understanding of the program, identify risks, assess feasibility, and appropriately scope the project.
Gathered information and determined that a formative evaluation was the most appropriate approach to find out what was working for the participants and where the program could be improved.
Determined the evaluation would be goal-based with a goal-free approach: the team assessed the quality of the program against the program goals and also evaluated actual outcomes to identify unexpected results from the ALP.
Identified the program designers as the intended users of the evaluation results, which they would implement to improve the program for future participants.
The evaluation team worked with the client to create a Program Logic Model (PLM) that outlined the ALP means and intended outcomes. The PLM was also used to aid in determining the dimensions of the ALP that would be evaluated for this effort. As a result, the team identified two main dimensions of the evaluation:
Curriculum (Very Important): Assessing the quality, relevance, and effectiveness of the ALP curriculum.
Admission (Important): Examining the marketing strategies and application processes to ensure the best candidates are recruited.
Specifically, the team was seeking to answer the following questions:
How well are the seminars/courses/assignments designed when compared to Keller’s Attention-Relevance-Confidence-Satisfaction (ARCS) model?
How satisfied and motivated are the students with the seminars/courses/assignments in terms of Keller’s ARCS model?
What was the program participants' experience with how they were informed about the program?
What was the program participants' experience with applying to the program?
As a result of the work in this phase, the team developed an Evaluation Proposal that included two main components:
The program logic model, which served as the team’s road map (W. K. Kellogg Foundation, 2004).
The data collection plan, which provided an overview of how data would be collected and the rationale for each method.
For the Implementation Phase, the team used multiple data collection methods to ensure the strengths and weaknesses of each method were factored into the evaluation. The primary sources of data were surveys of current and previous students and interviews with ALP faculty.
Survey: Using Keller’s (2009) ARCS Model (Attention, Relevance, Confidence, Satisfaction) as a guide, the evaluation team designed an online survey made up of 5-point Likert-scale questions and open-ended questions. This enabled the collection of both quantitative and qualitative data about the program and ways in which it might be improved (a sketch of how such responses might be summarized appears below).
Interview: Semi-structured, confidential interviews were conducted with ALP faculty. For these interviews, the evaluation team developed a tool with 16 questions designed to assess the program's quality, admission process, and marketing efforts.
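To illustrate how this kind of survey data can be rolled up, the following sketch averages 5-point Likert responses within each ARCS category. The ratings and item groupings shown are invented for the example and are not taken from the actual ALP survey.

```python
# Illustrative sketch only: the ratings below are hypothetical, not actual ALP survey data.
# Each ARCS category maps to the 5-point Likert ratings (1 = strongly disagree,
# 5 = strongly agree) collected for the survey items in that category.
from statistics import mean

responses = {
    "Attention":    [4, 5, 3, 4, 4],
    "Relevance":    [5, 4, 4, 5, 3],
    "Confidence":   [3, 4, 4, 3, 4],
    "Satisfaction": [4, 4, 5, 4, 3],
}

# Average the ratings within each category to get a category-level score out of 5.
category_means = {category: round(mean(ratings), 2) for category, ratings in responses.items()}

for category, score in category_means.items():
    print(f"{category}: {score} / 5")
```

Category-level means like these, read alongside the open-ended comments, make it easier to see which ARCS components of the curriculum are strongest and weakest.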
Table 1 summarizes the data collection and sources for each dimension.
Table 1
Data Collection Overview
The evaluation team used the data collected to rate each dimension against a four-level rubric with values of Superior, Acceptable, Improvement Needed, or Unacceptable. Once values were assigned to the individual dimensions, the team used a triangulation rubric to determine the overall quality of each dimension. This resulted in both dimensions receiving an Acceptable score, as summarized in Table 2.
Table 2
Dimension Scores
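This summary does not reproduce the rubric cutoffs, so the thresholds, source names, and triangulation rule in the sketch below are assumptions for illustration only. It shows one plausible way mean scores from different data sources could be mapped to the four rubric levels and combined into an overall rating for a dimension.

```python
# Illustrative sketch only: the thresholds, source names, and triangulation rule
# below are hypothetical, not the actual rubrics used in the evaluation.

def rubric_level(score: float) -> str:
    """Map a 1-5 mean score to one of the four rubric levels (assumed cutoffs)."""
    if score >= 4.5:
        return "Superior"
    if score >= 3.5:
        return "Acceptable"
    if score >= 2.5:
        return "Improvement Needed"
    return "Unacceptable"

# Hypothetical per-source mean scores for one dimension (e.g., Curriculum).
source_scores = {"participant_survey": 4.1, "faculty_interviews": 3.8}

# One simple triangulation rule: rate each source, then take the lowest
# level across sources as the overall (conservative) dimension rating.
order = ["Unacceptable", "Improvement Needed", "Acceptable", "Superior"]
levels = [rubric_level(score) for score in source_scores.values()]
overall = min(levels, key=order.index)

print(levels, "->", overall)  # ['Acceptable', 'Acceptable'] -> Acceptable
```

Taking the lowest level across sources is only one possible triangulation choice; the actual rubric may weight sources differently.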
The evaluation team further analyzed the survey and interview data, along with the rubric results, to identify areas for improvement and develop actionable recommendations for the client.
The team developed an 81-page Formative Evaluation Report which included evaluation findings and recommendations. The report was formally submitted to the client at the end of April.
As a result of the evaluation, both dimensions of the ALP were determined to be Acceptable. Even so, the evaluation team identified areas for improvement and provided the client with four recommendations. These recommendations will help the ALP staff improve the program as it evolves and ensure participants are supported throughout their experience. The recommendations addressed project effectiveness, data collection, marketing, and the application process.
The evaluation covered a 4-month period. The team was relatively inexperienced with formal evaluations and faced a significant learning curve. However, we successfully navigated the challenges of a small sample size, time constraints, and limited historical extant data by being adaptable, providing the client and instructor regular updates, and being transparent when issues arose. For example, we had intended to use program documents, instructional materials, and application data as part of the data collection and analysis; however, during the initial phase we determined this information would not provide enough data to draw conclusions. We therefore communicated these concerns to the client, made the necessary changes, and documented the limitation in the final report.
In addition to successfully completing the project and receiving positive feedback, our team learned how to make data-driven decisions, follow a systematic approach, uphold industry ethics, and provide the client with actionable recommendations. As our team reflected, we identified several ways we could have improved our evaluation: developing a more detailed evaluation plan, expanding our analysis of the organization’s strategies, and expanding our research into other extant data. The evaluation team’s accomplishment of the OPWL learning goals and areas of improvement can be reviewed here.
Chyung, S. Y. (2019). 10-step evaluation for training and performance improvement. Sage.
Keller, J. M. (2009). Motivational design for learning and performance: The ARCS model approach. Springer.
W. K. Kellogg Foundation. (2004). W. K. Kellogg Foundation logic model development guide. https://wkkf.issuelab.org/resource/logic-model-development-guide.html