Sarah Poe, Tonya Barnes, Andrea Cordovez, and Sarah Ullmann
OPWL 530
Evaluation
Dr. Yonnie Chyung
The Operational Technology Assurance (OTA) 301 training course represents the culmination of a comprehensive training program developed by XYZ Inc. (a pseudonym) to address a critical need within the Nuclear Security Enterprise (NSE). XYZ Inc. delivers the OTA course and evaluates the impact of what participants learned in the course. The Evaluation Team undertook a project to assess how effective that evaluation of the OTA course is.
The focus or subject of an evaluation is called an evaluand. The evaluand of this project is the Level 3 evaluation of the OTA 301 course developed by XYZ Inc. Level 3 refers to the behavior change level of Kirkpatrick’s Four Levels of Evaluation model (Kirkpatrick & Kirkpatrick, 2016). Mr. K is an employee of XYZ Inc., an Instructional Designer of the OTA 301 course, and the client for this project.
OTA 301 is a mastery course that aims to teach the OTA process and allows for the OTA Guidebook to be put into practice before performing an assessment in the field. The Level 3 evaluation of the OTA 301 course is still in the pilot phase. It aims to determine if learning has transferred to use in the real world and to collect data on the effectiveness of the course in preparing participants to successfully apply what they’ve learned. Results of this project will be used to revise the current strategy for and instruments used in conducting the Level 3 evaluation, so that the stakeholders may accurately demonstrate the efficacy of the course, identify areas to improve the course, and continue to receive support and funding for the course.
Upstream stakeholders include the Instructional Designers, Project Managers, and program funders. Direct impactees include OTA 301 course participants and staff members. Indirect impacts are far reaching due to XYZ Inc.’s role in national security.
This back-end, formative evaluation of the Level 3 evaluation produces information that can be used by XYZ Inc. to make improvements to the course design that would impact the participant perceptions of the course and their learning outcomes. The evaluation is goal-based, as the OTA course has objectives related to participants' behavior change due to learning, driving what is measured in the Level 3 evaluation. Currently, the Level 3 evaluation program consists of a self-administered survey designed to capture the participants' experiences and their subsequent application of OTA concepts in their jobs.
The evaluation team selected two evaluation dimensions by assessing needs regarding the Level 3 evaluation program and relevant frameworks and standards associated with the Kirkpatrick model. An expert in the field, Dr. Chyung, was consulted, and a program logic model was utilized. The dimensions were weighted for importance based on the goals of the evaluation. Dr. Chyung’s (2019) 10-step evaluation model was followed from the beginning to the end of the project.
Data was collected from the available sources, and instruments with rubrics for evaluating the data were created. The data was then analyzed, and the team drew conclusions based on the findings. This final report was prepared by all four team members.
The OTA course primarily focuses on hands-on training using systems that are currently operating in the field. The course is based on the OTA handbook and focuses on getting learners to apply principles from the handbook consistently. The current evaluation that the instructional designer is using is Kirkpatrick’s Level 3 evaluation. The aim of the evaluation is to understand the impact and effectiveness of the course and determine whether participants actually changed their behavior as a result of what they learned.
The Level 3 Evaluation program seeks to provide valuable insights into the long-term outcomes and benefits of the OTA 301 training course.
Through communication with the client, the evaluation team helped develop a program logic model for the OTA 301 training course. Table 2 is the program logic model, outlining the means and results of the program.
Resources: Facilities, tools, materials, data, personnel, etc. that the program will use
Activities: Processes that the program will execute
Outputs: Products generated immediately following the program
Outcomes: Expected changes from the participants as a result of the program
Impact: Intended and unintended changes
The results of the evaluation came from the data that was collected and analyzed using the evaluation instruments. The dimensions that were evaluated were selected to give a thorough review of the evaluand.
The first dimension was data collection. It focused on evaluating the tools and processes the client uses to conduct the Level 3 evaluation, determining whether the tools were appropriately constructed to evaluate Level 3 results and whether the evaluation program was following industry best practices for data collection processes and ethics. Dimension 1 was weighted as “Very Important” and scored “Needs Improvement.”
The second dimension was environmental factors. It focused on the environmental factors that may be influencing the successful collection of Level 3 data, aiming to analyze whether external environmental factors were supporting or hindering XYZ Inc.’s evaluation of Level 3 results of the OTA 301 training program. Dimension 2 was weighted as “Important” and scored “Adequate.”
The feasibility of the project, risks, and limitations were all identified, along with consideration of the ethics of each evaluation step. While acknowledging the limitations in the evaluation project, the critical importance of contextual and environmental factors was recognized. This understanding shaped the trajectory of the evaluation project. The team was unable to collect any data through the environmental survey, rendering that instrument unusable. However, this revealed the presence of significant environmental barriers that the client needs to be aware of and willing to circumvent when conducting similar projects in the future.
Despite these constraints, the lack of data and the limited variety of data sources were successfully mitigated by adhering to evidence-based practices. The team examined peer-reviewed research to benchmark evaluation best practices, compared existing data against rubrics, and utilized expert feedback from the client and professor to meet the client’s and the organization’s specific goals.
The goals of the client and the organization were considered with the application of best practices throughout the 10-step evaluation process. As a result, the report is professional, ethical, and as comprehensive as possible.
The evaluation of the OTA 301 Level 3 evaluation program provided a comprehensive overview of the current state of the program, revealing insights into data collection methods and environmental factors. The findings from both dimensions shed light on areas of success and improvement within the Level 3 evaluation.
Based on project findings, the evaluation team recommends the following to improve the Level 3 evaluation program efforts of the Operational Technology Assurance 301 training course:
Enhance Data Collection Methods:
Required Questions: Augment the self-administered survey with more required questions to deepen insights into participant experiences and knowledge application.
Timing Adjustment: Send out the survey 1-3 months after the course, allowing participants sufficient time to apply OTA principles in their roles before assessment.
Diversify Data Collection Methods:
On-the-Job Observations: Incorporate on-the-job observations to complement survey data, providing a more holistic understanding of participants' application of OTA principles.
Virtual Simulations: Due to restrictions on facility access, explore virtual simulations as an alternative data collection method, offering realistic scenarios for assessment.
Expand Feedback Loops:
Participant Feedback: Establish additional feedback loops from participants through interviews or focus group discussions to gain qualitative insights into their experiences.
Peer and Supervisor Feedback: Solicit input from peers, direct reports, and supervisors to capture a comprehensive view of participants' performance and behavior change.
In conclusion, the Level 3 evaluation program holds great potential but needs improvement. If the recommendations are followed, the strategy and instruments of the Level 3 evaluation of the OTA 301 course can be refined, enabling the client to effectively showcase the course's efficacy, pinpoint areas for enhancement, and maintain ongoing support and funding.
Chyung, S. Y. (2019). 10-step evaluation for training and performance improvement. Sage.
Kirkpatrick, D. (1996). Evaluating training programs: The four levels. Berrett-Koehler.
Kirkpatrick, J. D., & Kirkpatrick, W. K. (2016). Kirkpatrick's four levels of training evaluation. Association for Talent Development.
Appendix A. Data Collection Methods
Table A-1. Data Collection Instruments
Table A-2. Data Collection Methods
Appendix B. Rubrics, Instruments and Summaries
Table B-1. Dimensional Rubrics
Table B-2. SWOT Analysis
Exhibit B-1. Completed Checklist of OTA 301 Survey
Exhibit B-2. 1.1/2.1 Interview Summary
Exhibit B-3. Completed Level 3 Checklist
Exhibit B-4. Instrument 2.1 “Environmental Factors Survey”
Appendix C. Informed Consent