Data Collection

Weekly Selection Tests

I used weekly selection tests to measure students' comprehension of the district-mandated story we were working on each week. Each selection test had six multiple-choice comprehension questions about that week's story. I chose these tests as one piece of data because they aligned with the district curriculum and provided quantitative data on students' comprehension of the whole-group story. I then averaged the students' scores by leveled reading group to see how each group's reading comprehension was progressing each week. The red group was significantly below first-grade reading expectations, the yellow group was slightly below first-grade expectations, the green group was meeting first-grade expectations, the blue group was slightly above first-grade expectations, and the purple group was significantly above first-grade expectations. The y-axis reflects each group's overall average score on that week's selection test. The x-axis reflects the weeks of progression through the study.
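For example, using purely hypothetical scores, if the four students in one group answered 4, 5, 5, and 6 of the six questions correctly in a given week, that group's average for the week would be (4 + 5 + 5 + 6) / 4 = 5 out of 6 questions, or roughly 83%.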



Weekly Whole-Group Questioning Log

I also wanted qualitative data on the students' reading comprehension of the weekly whole-group story. To achieve this, I kept a log of anecdotal records and observations of the students' ability to answer targeted questions, asking an average of 15 questions for each story. Along with keeping notes, I coded each response on a 0-3 scale (with 3 being the most complete and accurate) based on how completely and accurately the student answered the targeted question. I then created the graphic below to show the percentage of questions answered at each level by week. The y-axis reflects the percentage of questions answered at each level. The x-axis reflects the coded answer levels for each week of the study.
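For example, with hypothetical counts, if students answered 3 of a week's 15 questions at level 0, 3 at level 1, 5 at level 2, and 4 at level 3, that week's distribution would be 20% at level 0, 20% at level 1, 33% at level 2, and 27% at level 3.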


Biweekly Running Record Samples

In addition to whole-group data, I also wanted to see how students' comprehension was progressing in books appropriate for their reading levels. To achieve this, I took biweekly running records of one student per reading group. I administered the running record along with within-, beyond-, and about-the-text comprehension questions, similar to what students would see during a Fountas and Pinnell Benchmark Assessment. I coded their responses to the questions on the same 0-3 rating scale described above and then divided each student's score by the total number of points possible to create a percentage. I used this percentage to see how comprehension on these running records was progressing over time with the implementation of targeted questioning, with the hope that each selected student's learning was reflective of his or her group's learning as well. The y-axis reflects the percentage of comprehension questions answered correctly. The x-axis reflects each biweekly administration over the course of the study, with a score for the selected student from each reading group.
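For example, in a hypothetical administration, if a running record included four comprehension questions worth up to 3 points each (12 points possible) and the student earned 9 points, that administration would be scored 9 / 12 = 75%.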


Fountas and Pinnell's Self-Assessment for Guided Reading

I also completed Fountas and Pinnell's Self-Assessment for Guided Reading before and after conducting my research. This is a self-assessment tool with which a teacher evaluates the effectiveness of his or her guided reading program. I used it for qualitative data on myself as a teacher, evaluating how I felt about my guided reading instruction, with a particular focus on the planning aspect, before and after implementing my action research.