Teachers have long provided students with traditional oral and written feedback but may not yet have embraced video feedback via screencasting, which may be a more effective method of formative assessment, particularly for digital production.
The purpose of this study is to determine whether teachers are using screencasting software, which has been found to be an effective means of providing students with feedback on digital production. A literature review was conducted to identify the many ways screencasting can benefit teachers when they provide feedback to their students. Not only has screencasting been shown to be effective, it may also be preferable to traditional methods of feedback such as in-person conferencing and written comments. Findings from this study may be significant for administrators crafting curriculum models and determining professional development options aimed at improving the teacher-to-student feedback process. The study may also be pertinent for teachers considering ways to improve student achievement and enhance learning by incorporating alternative methods of feedback into their routines.
QUAN: The target population for this cross-sectional study was up to 300 teachers assigned to students in grades 5-12 in public schools throughout the state of New Jersey. To corroborate the qualitative findings, a sample size of approximately 300 is recommended to lend credibility to the results, as more data provide a more accurate representation of the population at large (Creswell & Guetterman, 2019). One sampling criterion limited participation to teachers assigned to grades 5-12; because this study addresses how best to support student-generated digital content, grades K-4 were excluded, as such content is generally not created at those levels or does not require the same depth of feedback. New Jersey was selected as the geographic region because it is the most densely populated state in the nation; this should provide a sample that approximates the national population with regard to demographics such as socioeconomic and cultural conditions.
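To illustrate why a sample of roughly 300 lends credibility, a simple margin-of-error calculation can be sketched; the 95% confidence level and maximum-variability response proportion (p = 0.5) used below are illustrative assumptions, not parameters specified in the study:

MoE = z * sqrt(p(1 - p) / n) = 1.96 * sqrt(0.25 / 300) ≈ 0.057

That is, under these assumptions, survey proportions estimated from 300 respondents would carry a margin of error of roughly ±6 percentage points.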
Qual: For the qualitative phase of the research, participants were sampled based on their responses to the survey questions. Respondents who indicated that they were, or might be, willing to be contacted for an interview and who were familiar with screencasting software were eligible to be contacted for further research. Anticipating an overwhelmingly positive response, the researcher added criteria in advance, such as a high frequency of digital production tasks assigned to students and/or a preference for formative assessment practices. Using this purposive sampling approach, the researcher sought 20-30 participants to interview so the phenomenon could be explored more thoroughly. Responses from interview participants were compared with results from the quantitative phase to further test for consistency through methodological triangulation (Creswell & Plano Clark, 2010; Patton, 2015).
This research proposal was first approved by the Institutional Review Board (IRB) at New Jersey City University.
QUAN: Following approval, administrators of New Jersey public schools were contacted through the researcher’s professional network to determine their willingness to participate. Administrators who agreed contacted their staff with a predesigned “blurb” about the survey, which they could copy and paste, along with a hyperlink to access the form directly via Qualtrics. Additional participants from across the state were recruited through social media: on Twitter using hashtags such as #njed and #njschools, and in Facebook groups representing New Jersey organizations such as NJ Educators Unite, NJ Teachers, and NJEA. These participants could take the survey via a live link posted on the group page or as a tweet in the relevant hashtag thread. Additional means of survey distribution included further networking through NJCU alumni cohort members and attendance at EdCamp events alongside other professionals in the field who have regular contact with teachers of grades 5-12. The researcher used these networking opportunities to promote the survey further through convenience sampling. Those who agreed to participate through one of these recruitment methods provided a theoretical, or concept, sample that allowed the researcher to understand teachers’ perceptions of feedback methods, the frequency and types of digital content tasks they assign, and their personal experiences with screencasting software (Creswell & Guetterman, 2019).
Qual: A purposeful, nonprobability sampling method based on group characteristics was used, as the researcher sought pluralistic perspectives on the existing obstacles, limitations, and benefits of screencasting technology for providing student feedback on digital production (Patton, 2015). This investigation was conducted through interviews that explored teachers’ perceptions of, and the potential difficulties they face in, providing students with feedback. Ten guiding questions were prepared for the interview, which the researcher referenced to engage participants about their existing feedback practices. Participants selected for an interview were contacted by email or text, according to preference, and interviews were scheduled at mutually agreeable times using the app Calendly; when an interview time was set, the researcher received a notification so preparations could be made. Participants were provided with a copy of the framework questions 24 hours in advance, along with a reminder of the scheduled interview, so they could prepare if they desired. In the meeting-reminder email, the Letter of Consent was provided as a hyperlink, and permissions were collected via Qualtrics; this letter delineated the guidelines, procedures, and safeguards of the interview process. Zoom was used to record responses, as most participants were likely already familiar and comfortable with the interface.

An audio transcript was downloaded from Zoom and uploaded to Otter.ai to produce a written transcript. Once corrected, the transcript was uploaded to the Atlas.ti coding software so the researcher could review and analyze textual responses, first using open coding. The researcher then analyzed the open codes and applied axial coding to identify relationships and patterns and to determine which themes were present (Creswell & Guetterman, 2019). Similarities and differences in the data were considered as coded keywords and phrases emerged as themes or relationships. Noteworthy information was sought to further explain the findings from the analysis. Supportive or significant quotes were saved as memos for each participant and reviewed when deciding whether and how the quotes should be included in the findings. The approach was a qualitative inductive analysis in which new concepts were learned through constant comparison, a technique drawn from Grounded Theory (Glaser & Strauss, 1967).
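To make the open-to-axial coding step concrete, the following is a minimal illustrative sketch of how open codes drawn from interview transcripts might be tallied and grouped under broader axial categories. It is not part of the study’s actual procedure, which was carried out in Atlas.ti, and all code names and groupings shown are hypothetical:

from collections import Counter

# Hypothetical open codes assigned to interview excerpts (one list per participant)
open_codes = {
    "Teacher_A": ["time_savings", "personal_tone", "tech_hurdles"],
    "Teacher_B": ["time_savings", "student_engagement"],
    "Teacher_C": ["tech_hurdles", "personal_tone", "student_engagement"],
}

# Hypothetical axial categories grouping related open codes
axial_map = {
    "time_savings": "Benefits",
    "personal_tone": "Benefits",
    "student_engagement": "Benefits",
    "tech_hurdles": "Obstacles",
}

# Tally how often each axial theme appears across all participants
theme_counts = Counter(
    axial_map[code] for codes in open_codes.values() for code in codes
)
print(theme_counts)  # e.g., Counter({'Benefits': 6, 'Obstacles': 2})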
The transcript was emailed back to each participant for verification and included the audio file, provided as a link via Otter.ai, so participants could listen to the interview again if desired. This member check allowed participants to suggest revisions to the transcript and thereby verify the data collected.
The researcher assured participants that the data would be used for research purposes only and explained the purpose and design of the study. A Letter of Consent was signed for both phases, reminding subjects that their participation was voluntary and guaranteeing strict confidentiality with regard to participant identity (Creswell & Creswell, 2018). Teachers’ roles were used in lieu of actual names, and any unique identifiers were removed in the final report to protect the participants and their districts.
Quantitative and qualitative data sets were initially analyzed in separate phases and then cross-referenced so that new findings about the phenomenon could be discovered and the data from one set could validate the other. At this point the interpretive process begins: both data sets are mixed at the point of interface to determine their relationship and how the qualitative data can support and build on the findings from the quantitative analysis (Creswell & Plano Clark, 2010). In interpreting the data through constant comparison, findings can be constructed by having one data set support or expand on the findings of the other (Creswell & Plano Clark, 2010; Onwuegbuzie & Leech, 2006). Any conclusions drawn from the findings were more detailed and precise because the qualitative data were used to explain or build on the quantitative results (Plano Clark & Ivankova, 2016). Through cross-referencing of the data sets, patterns revealed emergent themes that aided in understanding the phenomenon (Patton, 2015). By employing both quantitative and qualitative methods, the strengths of each approach can support the findings, show how responses are similar and different, and allow the two data sets to converge and validate each other (Dawadi et al., 2021).