This module is focused on program evaluation: what the process is, how to conduct it, and the resources you'll have access to.
Why program evaluation? Most researchers want their findings to have an impact on the field. Many of you work in the field, where you may want to know whether the interventions you use are actually working.
Learning outcomes: You will be able to explain program evaluation and why it is important.
What we will do: Review some definitions and watch a YouTube video by Bree! You can reflect privately or on Discord!
This module will take 20-25 minutes to complete.
Program evaluation is a process of seeing how a program is designed and how effectively it follows its guiding model. The two words of the term set this up: program - an activity carried out with a long-term aim; and evaluation - a systematic review of how well that activity is working. A program evaluation can be done from inside or outside of the group. In our case, Urban is coming from outside as a partner organization to help your Credible Messenger programs with program evaluations!
Another part of program evaluation is identifying where a group may be struggling and finding possible ways to help. This can include suggesting alternative approaches to an area of struggle, redirecting resources, changing policies, and making other program adjustments. Overall, though, the purpose is to best serve the participants in a program, so they can get the most out of it!
Program evaluation is a specific type of research: its purpose is to inform practice (such as a program), whereas research more broadly emphasizes methodology to test, observe, or confirm.
Watch this video about program evaluation and policy analysis with Bree!
*Note: We now use the term "interest holders" instead of "stakeholders." Read more about why here.
Here is a great guide from the National Reentry Resource Center that defines the different types of program evaluations and how evaluation results can be used to make program improvements.
We're doing a couple of things here:
Training current CM facilitators and administrators on how to do their own program evaluations
Conducting a program evaluation to help determine effective and ineffective parts of programming
This program evaluation itself is divided into three steps!
Establish the theory of change (covered in depth in the next module!) and logic model behind mentoring programs
Collect a wide range of data from mixed sources
Assess the implementation according to the plans of CM mentoring programs
This first point is central! When all is said and done, we want to leave you better equipped to review your own programs, highlight successes, and identify areas for improvement.
Lots of frameworks to uncover here...
What are we evaluating?
A good evaluation covers pretty much all aspects of a program's activities. First, we'll establish an understanding of the logic models and theories of change (covered in the next module!) guiding the programs of study. This gives us a picture of what you envision as the actions behind the changes you observe in mentees from mentoring.
The second big part is the meat and potatoes of the evaluation. We'll collectively gather a large amount of data from interviews, focus groups, and program/administrative paperwork and documentation. We'll discuss focus group plans, instructions, and expectations in later modules! Gathering such diverse data will give us a lot of information about the programs, which also means a lot of work to collect and analyze!
The third and final part is using the data collected to determine to what extent a program is working as intended. This means analyzing a lot of data, and we'll use statistical skills and tools to help us out.
Recall our discussion of qualitative vs quantitative data. Which of our data sources will fall under each category?
Your Role in the Evaluation
As community researchers you have an equal hand in gathering data and analyzing it! We'll rely on your expertise to find the right people for interviews and focus groups. Likewise, you're the experts on what administrative and program data is relevant!
When it comes to the analysis, we'll be using a lot of qualitative data, meaning we'll all work together to pick out themes and design our plan for analysis. We'll look for common themes in the interview and focus group notes, pick out key words, and finally enter everything into specialized software called NVivo. This will help us spot trends and describe our findings in more detail.
In the Discord discussion for Module 12, why is it important to evaluate programs or policies? Do you have any examples of how program evaluation helped or perhaps brought about new ideas?
The Takeaway: We'll be doing a rigorous program evaluation. That means finding the plan and seeing how well things are going according to that plan. We're going to gather a lot of data from many sources, and then we'll analyze it all. All of this work will help us identify the successes and struggles of running your CM mentoring programs, all with the goal of sharing how to make programs better!