I have the privilege of working at a school accredited by Canadian Accredited Independent Schools (CAIS). This national association of close to 100 schools represents the top independent schools in the country, and formal accreditation involves a rigorous process. As CAIS writes:
"In Canada, education is provincially mandated, and CAIS Schools are required to meet Ministry requirements. However, CAIS schools provide more. Our National Standards and Procedures are congruent with the internationally accepted criteria and model core standards adopted by the NAIS International Commission on Accreditation and updated annually based on current research and exceptional practices." (source)
In brief, the process of accreditation takes the form of four components (see CAIS accreditation page for more information and the source of the quotes used below):
1. Internal Evaluation
The school seeking to be accredited or reaccredited drafts an extensive internal report on its "strengths, weaknesses, future strategic plans, and examining how well its program fulfils its mission." The finished report can run to hundreds of pages and takes most schools over a year to complete. [a component of this will be the subject of this Program Evaluation Design task]
2. On-Site Review
A Visiting Committee of leaders from other CAIS schools then visits the school seeking accreditation/reaccreditation with the objective of validating "the school’s Internal Evaluation and challenge them to reflect on where there are opportunities for growth".
3. Visiting Committee Report
The Visiting Committee then produces an extensive report containing "an overview of where the school stands in terms of each Effective Practice and offers commendations (what the school is doing exceptionally well), suggestions (best professional advice) and recommendations (areas of growth that require action by CAIS)".
4. Response Reports
The school seeking accreditation/reaccreditation will then produce a response to the Visiting Committee's report, articulating how the School will respond to recommendations made.
I will be looking at developing a Program Evaluation Design for the Internal Evaluation stage of my school's CAIS accreditation process. Each school in good standing goes through the reaccreditation process once every 7 years. Our school's internal evaluation is to be completed before the 2020-2021 academic year.
Creating an internal evaluation is a significant undertaking, as the school needs to provide meaningful and insightful feedback on each of 12 standards:
In order to derive the most benefit from the Internal Evaluation process, schools should create a strategy that solicits feedback on school programming from as many constituencies as possible, including faculty, staff, students, parents, alumni, and members of the governance team (board of directors, trustees, etc.).
For our CAIS accreditation, we are looking at engaging over 200 employees (faculty and staff), 580 students, hundreds of parents, board members and thousands of alumni.
The timeline for this process is 16 months:
My colleague and I have been charged with managing the internal review process.
In brief, the purpose of the exercise is to evaluate how well the School is meeting the 12 CAIS accreditation standards and to identify the School's strengths and areas for improvement in light of these standards.
At this stage, the specific evaluation questions that I have are as follows:
It is my hope that answers to these questions will be forthcoming in the next several weeks.
For the purpose of the exercise, I will hone in on one specific academic program that is part of the larger evaluation. Specifically, the program that will be developed from this step onward relates to the teaching of learning skills in Grades 9 to 12:
The evaluation method of choice for this project is a combination of a process approach and an impact approach. The process evaluation would be applied in the first 12 months of the program to ascertain whether teachers understand and can articulate the effective teaching practices outlined in the program. This information would be collected in departmental discussions and reported back. If it becomes apparent through the process evaluation that there are gaps in understanding, further workshops may be arranged to help clarify.
This process evaluation would be a good fit for the program, as the faculty and administration are nimble enough that we could correct course midstream if necessary. Indeed, one of the great things about leading change in a K-12 school is that you can always reset, restate, and reinvigorate a program come the start of a new academic year.
After years 2 and 3, impact evaluations would be undertaken to see if the intermediate and long-term goals of the project, as stated above, are being met. This data would be obtained via small group discussions, large group discussions, and surveys of teachers, students, and parents.
An impact evaluation would be a good fit for the mature stages of the program to see if we really were able to enact change. Indeed, the school may want to follow up with graduates after 2 or 4 years of university to quantify the impact of the learning skills program.
For this particular program, assessing impact is of greatest importance, as the interventions are crafted around techniques to increase student achievement. The impact will need to be assessed in both the short term (midway through year 1) and the long term (end of year 2 and end of year 3).
Data will be collected at the beginning of the program, at the midpoint, and in the long term. A significant focus of the program evaluation design will be on usability, as articulated by Patton (1997) and quoted in Saunders (2012):
“Utilization-focussed evaluation emphasises that what happens from the very beginning of a study will determine its eventual impact long before a final report is produced. [Utilization-focussed evaluation] concerns how real people in the real world apply evaluation findings and experience the evaluation process…it is about intended use by intended people.”
The first pre-program collection method will be a full-faculty anonymous survey asking for self-reporting on the regularity of use of ten proposed interventions. The survey will also ask faculty to rate each of the ten proposed interventions on two separate Likert scales:
Question #1: Choose the option below that best matches your professional opinion on the impact of proposed intervention ______________ .
1—the intervention will have little, if any, effect on student achievement
2—the intervention is more likely than not to have a positive effect on student achievement
3—the intervention will have a positive effect on student achievement and teachers should try it out where possible
4—the intervention will absolutely have a positive impact on student achievement and should be used by all teachers
5—there is no question the intervention has a strong, positive impact on student achievement and should be considered ‘non-negotiable’ for all teachers at the School
Question #2: Choose the option below that best matches your current ability to successfully implement the proposed intervention ______________ in your daily teaching.
1—I have significant difficulty in implementing the intervention and do not often use it in my practice
2—I have some difficulty in implementing the intervention which impedes my ability to use it regularly
3—I have little to no difficulty in implementing the intervention but could do more
4—I regularly implement the intervention in my practice
5—I have mastered the intervention and could be a resource to help colleagues do the same
The results from the first question will be tabulated, giving each intervention a total score based on the faculty’s perception of its impact.
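Purely for illustration, here is a minimal sketch in Python of how that tabulation might be done; the intervention names and responses below are hypothetical placeholders rather than actual survey data:

```python
# A minimal sketch of tabulating perceived-impact scores per intervention
# from the Question #1 Likert responses. The intervention names and the
# responses below are hypothetical placeholders, not actual survey data.

from collections import defaultdict

# Each survey response maps an intervention to a 1-5 Likert rating.
responses = [
    {"Intervention A": 4, "Intervention B": 3, "Intervention C": 5},
    {"Intervention A": 5, "Intervention B": 2, "Intervention C": 4},
    {"Intervention A": 3, "Intervention B": 4, "Intervention C": 4},
]

totals = defaultdict(int)
counts = defaultdict(int)
for response in responses:
    for intervention, rating in response.items():
        totals[intervention] += rating
        counts[intervention] += 1

# Rank interventions by total perceived-impact score (highest first)
# and report the mean rating alongside the total.
for intervention in sorted(totals, key=totals.get, reverse=True):
    mean = totals[intervention] / counts[intervention]
    print(f"{intervention}: total = {totals[intervention]}, mean = {mean:.2f}")
```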
The second question will provide quantitative feedback as to which interventions faculty need support in implementing; this support could be professional development, classroom resources, or administrative backing. A follow-up written question will be asked to gather specific examples of what support would be most helpful. This qualitative data will be used to structure professional development sessions and support mechanisms throughout the trial period. This same question will be asked at the 6-month and 1-year marks in order to check in on what supports would help.
At the end of the trial period (at the end of year 2 and year 3), faculty will again be asked to evaluate the impact of the ten interventions using the exact same questions with the same Likert scales. This data will then be compared with the original program data.
The comparison will generate two significant findings based on quantitative data:
1) Which of the proposed interventions increased or decreased in terms of perceived effectiveness to raise student achievement. This will help determine where to put continued focus going forward; i.e., the School should recommit to the interventions that have the highest perceived transformative value.
2) By asking faculty to re-evaluate their professional abilities with each of the interventions, the evaluators will be able to quantitatively assess the change in professional ability to implement the different interventions. The goal would be to significantly increase the number of faculty who regularly implement the interventions in their practice over the course of the program period.
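Again purely as an illustration, a short Python sketch of how the pre/post comparison could be computed once both rounds of mean ratings have been tabulated (all names and values below are hypothetical):

```python
# A rough sketch of the pre/post comparison: the same tabulation is run on the
# baseline survey and the end-of-trial survey, and the change in mean rating is
# reported per intervention. All names and scores are hypothetical placeholders.

baseline = {"Intervention A": 3.1, "Intervention B": 2.8, "Intervention C": 3.9}
end_of_trial = {"Intervention A": 4.2, "Intervention B": 2.9, "Intervention C": 4.1}

for intervention, before in baseline.items():
    after = end_of_trial[intervention]
    change = after - before
    direction = "increase" if change > 0 else "decrease" if change < 0 else "no change"
    print(f"{intervention}: {before:.1f} -> {after:.1f} ({direction}, {change:+.1f})")
```

The same comparison applies to the Question #2 data, where the change of interest is the shift in faculty members' self-reported ability to implement each intervention.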
Inspired by the work of Shulha & Cousins (1997), Saunders (2012), and Alkin & Taut (2003), the structure of the evaluation will be predicated on the assumption that the results will be used to improve the structure of the program. In order to generate results that maximize potential use (Saunders, 2012, p. 422), stakeholders must be involved in the process from an early stage. According to Shulha & Cousins:
“…participation in evaluation gives stakeholders confidence in their ability to use research procedures, confidence in the quality of the information that is generated by these procedures, and a sense of ownership in the evaluation results and their application.”
To this end, here are the central ways we will enhance evaluation use:
Through the strategies listed above, the intention is to generate significant stakeholder buy-in, thereby increasing receptivity to both the evaluation process and the implementation of its results.
In terms of reporting strategies, all data will be shared via Google Docs, with overviews and analysis of the data taking place at full faculty meetings. The administrators responsible for the program will ensure data transparency. Ultimately, the program is designed to benefit students, and if those benefits do not accrue, the program will be modified or cancelled.
Sources:
Alkin, M. C., & Taut, S. (2003). Unbundling evaluation use. Studies in Educational Evaluation, 29, 1-12.
Saunders, M. (2012). The use and usability of evaluation outputs: A social practice approach. Evaluation, 18(4), 421-436.
Shulha, L., & Cousins, B. (1997). Evaluation use: Theory, research and practice since 1986. Evaluation Practice, 18, 195-208.
In accordance with the OECD’s Program Evaluation Standards, the following procedures are built into the design of the evaluative structure. The evaluation will seek to meet or exceed the standards in the following areas:
Should any concerns arise about the evaluation over the course of the program, every effort will be made to address them in order to maintain stakeholder confidence.
I look forward to your feedback in our PME 802 Discussion Forum.