Happy Ear (a pseudonym) is a leading hearing aid organization headquartered in Minnesota, operating over 1,500 locations nationwide. It is a subsidiary of Audionix (a pseudonym), the global leader in hearing aid retail. Since 1948, Happy Ear has focused on reconnecting individuals with their loved ones through innovative hearing healthcare. Its Happy Ear Advantage includes digital hearing technology, lifetime maintenance, and a highly trained workforce. The Happy Ear Foundation further supports underserved communities by providing hearing care to those in need.
Happy Ear invests in training its employees, including annual programs and a licensure preparation pathway. The first step is the Happy Ear Boot Camp, a three-week virtual training program delivered through a Learning Management System (LMS). It introduces foundational skills through interactive modules, live webinars, and textbook readings. Topics include audiometry, hearing instruments, infection control, counseling, and auditory anatomy, preparing participants for a subsequent 12-week program and state licensure exam.
Problem/Gap
Happy Ear identified a critical gap in its training efficiency and outcomes. The primary challenge lies in the inefficiencies of its current data collection and tracking methods during the Boot Camp program. The existing processes do not sufficiently capture detailed participant engagement metrics or long-term performance trends, which are vital for tailoring the training to better prepare participants for the subsequent licensure exams. Moreover, inconsistent participant engagement across modules further complicates achieving optimal program effectiveness.
The Happy Ear Boot Camp is a critical three-week virtual training program designed to equip new hires with the foundational knowledge and skills necessary for state licensure in hearing aid dispensing. The boot camp has three types of stakeholders: upstream stakeholders, direct impactees, and indirect impactees:
Upstream Stakeholders
Emily and Joyce, as training managers, oversee the Happy Ear Boot Camp, ensuring it aligns with organizational goals. They focus on program implementation, content management, and participant support. Both are keenly interested in leveraging evaluation results to refine engagement and completion rates.
Direct Impactees
The primary beneficiaries of the Boot Camp are its participants—new hires and employees seeking licensure as hearing aid dispensers. The program is crucial for their advancement, preparing them for further training and licensure exams.
Indirect Impactees
The improved skills from the Boot Camp have a ripple effect, benefiting patients, franchise owners, and employees. Enhanced care quality and increased service efficiency boost patient satisfaction and clinic success, extending to underserved rural areas and promoting a positive work culture.
Emily and Joyce, as key stakeholders and training managers, requested an evaluation to assess the program's effectiveness in preparing participants for state licensure. Our team completed this evaluation as a course project for OPWL 530 at Boise State University (Hunt et al., 2024). The evaluation aimed to assess the effectiveness of the Happy Ear Boot Camp program in preparing participants for state licensure by identifying areas for program improvement in data collection and tracking effectiveness, as well as participant engagement.
This was a formative evaluation used to identify areas for improvement in the Happy Ear Boot Camp. Its purpose was to enhance program content, delivery, and participant outcomes by focusing on two dimensions: Data Collection/Tracking Effectiveness and Participant Engagement.
The evaluation aimed to determine how well data collection supports training improvements and to identify engagement patterns that reveal opportunities for enhancing content delivery. The team prioritized data collection and tracking effectiveness as critical for improving the program's impact on licensure readiness. The team ranked participant engagement as important and evaluated it to understand its influence on learning outcomes.
Data collection methods included an extant data review, semi-structured interviews, and virtual observations. The extant data focused on attendance, assignments, and completion rates from the January, February, and April 2024 cohorts. Interviews with program managers Emily and Joyce provided qualitative insights into data collection and engagement challenges. A virtual observation of a live webinar assessed participant engagement using a standardized checklist.
Although surveys and LMS data were unavailable, the evaluation team adapted by triangulating alternative data sources to ensure validity. Data collection occurred between October 12 and November 15, 2024, with analysis concluding by November 21. These efforts provided actionable insights to refine the Boot Camp's effectiveness in preparing participants for licensure.
Dimension One: Data Collection & Tracking Effectiveness
This dimension evaluated the utility and comprehensiveness of the data collected during the Boot Camp, aiming to improve content delivery and participants' readiness for the state licensure exam. The evaluation specifically investigated any existing data gaps that could hinder these improvements. The importance weighting for this dimension was assigned as Critical (2), underscoring its significance in assessing the overall effectiveness of the program.
To investigate this dimension, the evaluation team employed the following data collection methods:
Extant Data Review (Spreadsheet Data Review): Attendance records, completion rates, quiz scores, and performance on key assignments from the January, February, and April 2024 cohorts were analyzed to identify trends and patterns in participant performance.
Stakeholder Interviews: Semi-structured interviews were conducted via Zoom with key training managers to gather qualitative insights into the current data collection processes. This method helped to identify gaps and explore areas for improvement within the data tracking framework.
Web-Based Surveys for Instructors: Surveys intended to evaluate the effectiveness of data collection were distributed to instructors but could not be analyzed due to a lack of responses.
Dimension Two: Participant Engagement
This dimension examined how actively and consistently participants engaged with various assignments, quizzes, webinars, and activities throughout the three-week Boot Camp. The objective was to identify engagement patterns and evaluate opportunities for enhancing content delivery and learner support. This dimension was assigned an importance weighting of Important (1), indicating its relevance to the overall program effectiveness.
To assess participant engagement, the following data collection methods were utilized:
Observation: Virtual observations of participant engagement during live webinars were conducted, with notes evaluated using a standardized checklist. Specific metrics included responses to Poll Everywhere activities, chat participation, and verbal responses during discussions to gauge real-time interaction.
Extant Data Review (Spreadsheet Data): Analysis of homework submission records, quiz scores, webinar interactions, and scores on key assignments was performed to identify trends in overall participant engagement.
Web-based Surveys for Students: Surveys intended to measure participant engagement in the Boot Camp program could not be analyzed due to insufficient responses.
The evaluation team analyzed data sources against independent rubrics organized into three categories: Excellent, Fair, and Poor. These independent results were then categorized using triangulation rubrics to synthesize findings from multiple data sources for each dimension. The triangulation rubric results indicate the quality of each dimension and provide insight into the program’s strengths and areas for improvement.
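To make the synthesis logic concrete, the triangulation described above can be sketched in code. The category names (Excellent, Fair, Poor) and the importance weights (Critical = 2, Important = 1) come from this report, but the specific synthesis rule below is an illustrative assumption for demonstration purposes only, not the team's actual rubric.

```python
# Hypothetical sketch of a triangulation rubric. Category names and
# importance weights are from the report; the synthesis rule itself
# (Poor cap plus rounded-down average) is an assumed illustration.

RATING_SCORES = {"Excellent": 3, "Fair": 2, "Poor": 1}
SCORE_RATINGS = {score: name for name, score in RATING_SCORES.items()}

def triangulate(instrument_ratings):
    """Synthesize per-instrument ratings into one dimension rating.

    Assumption: any instrument rated Poor caps the dimension at Poor;
    otherwise the dimension takes the rounded-down average score.
    """
    scores = [RATING_SCORES[r] for r in instrument_ratings]
    if min(scores) == 1:            # one Poor source caps the dimension
        return "Poor"
    return SCORE_RATINGS[sum(scores) // len(scores)]

# Dimension one (weight: Critical = 2): one instrument rated Poor
print(triangulate(["Fair", "Poor"]))    # -> Poor
# Dimension two (weight: Important = 1): both instruments rated Fair
print(triangulate(["Fair", "Fair"]))    # -> Fair
```

In this sketch the importance weights would scale each dimension's contribution to a program-level score rather than change the dimension's own rating, which matches how the report treats the weights as significance markers.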
Dimension One Overall Rating: Poor Quality
The absence of survey data significantly limited the comprehensive evaluation of data collection and tracking effectiveness. While attendance and completion tracking systems were somewhat functional in the narrow spreadsheet data analyzed, the inaccessibility of LMS data and inconsistent stakeholder feedback revealed critical gaps. Without survey data to validate these findings, and with one data source rated as poor, dimension one cannot be rated higher than poor quality.
Dimension Two Overall Rating: Fair Quality
Based on synthesized results from the dimension two instruments, the overall rating for participant engagement is Fair. Engagement was generally strong in the narrow sample of data provided by the client, but the lack of detailed LMS data showing specific engagement within modules and lessons was a highly limiting factor in evaluating this dimension. The overall results could also be skewed by the absence of detailed LMS data and of survey data capturing participant perspectives.
The evaluation of the Happy Ear Boot Camp provided valuable insights into the program's effectiveness in preparing participants for state licensure. The assessment revealed critical gaps in both data collection and participant engagement. Specifically, the evaluation identified that the current data collection methods were insufficient, leading to a poor overall rating for tracking effectiveness. The lack of survey responses from instructors hindered the ability to comprehensively analyze the effectiveness of the data collection strategies.
In contrast, participant engagement was rated as fair, indicating strong verbal participation during live sessions, yet highlighting inconsistencies across different modules due to the lack of comprehensive engagement data from the Learning Management System (LMS).
The findings suggest an urgent need for improving data collection processes, particularly the standardization of methods and increased accessibility of engagement metrics. Enhanced participant engagement strategies, including more interactive elements in the Boot Camp, are also recommended to foster a deeper connection between participants and the training content.
Access the full final evaluation report by clicking this link
Hunt, H., Archer, L., Nguyễn, D., & Martinez, A. (2024). Optimizing the Happy Ear Boot Camp: A focus on data and engagement. https://docs.google.com/document/d/1H7tIu8W8914cgyOhATWF20GCBdJ5Dlwk/r/edit/edit
Chyung, S. Y. (2019). 10-step evaluation for training and performance improvement. Thousand Oaks, CA: Sage.
Kirkpatrick, D. (1996). Evaluating training programs: The four levels. San Francisco: Berrett-Koehler.