Overall, the data showed evidence that the implementation of targeted questioning improved both my students' reading comprehension and my own comfort level when planning reading instruction.
The first piece of data that supported the idea that targeted questioning improved my students' reading comprehension was the data collected weekly from selection tests. Each reading group's average was calculated and recorded each week. All of the groups except for my extremely above-grade-level group showed a positive trend, with mostly improving scores as targeted questioning was being conducted. There were two weeks when most groups showed a decline in the data. One was the week of February 23rd, during which I missed a few days of instruction due to illness and instruction was delivered by substitute teachers. The other week that showed a decline was the week of March 2nd. That week's story was confusing for me as a teacher, and I did not think the comprehension questions on the test were high quality given the text itself, but because it was part of a district-mandated curriculum I was required to teach it. Other than those two weeks, most of the selection test data for the on- and below-grade-level groups showed a positive trend as the implementation of targeted questioning went on. As for my highest group, they scored 100 percent on all of the selection tests because they were well above the skill levels being assessed. I think it is important to note that they were also receiving additional instruction from the second-grade curriculum to meet their high needs.
The second piece of data that supported the idea that targeted questioning improved my students' reading comprehension was the data collected from my weekly whole-group questioning log. Each week when we read the weekly story aloud, I asked targeted questions of a specific sample of my students who I believed represented the class. I coded their responses based on how complete and accurate each answer was. From the beginning of the study, the implementation of targeted questioning seemed to help guide students to higher-level answers, with a majority of the answers falling in the 2 and 3 range. As the study went on, I found I was getting more and more level 3 answers and fewer level 1 and 0 answers. This showed that targeted questioning initially boosted students' reading comprehension and that, over time, it increased their reading comprehension even more.
The third piece of data that supported the idea that targeted questioning improved my students' reading comprehension was the data collected from biweekly running record comprehension checks. When I first implemented targeted questioning, all of the sample students' comprehension scores were below 70 percent. As the implementation of targeted questioning continued within their small groups, most students showed a positive trend in their reading comprehension. The targeted students from each group finished the last running record comprehension check with a score at or above 90 percent, showing great improvement in their reading comprehension. Again, I think it is important to note that the drop in scores during the week of February 23rd could be due to my illness and the different type of instruction delivered by the substitute teachers.
Another data collection point I used to monitor my own instruction and comfort level in implementing targeted questioning during my reading instruction was Fountas and Pinnell's Self-Assessment for Guided Reading. I took this assessment before and after implementing my action research and found that I had made great growth in both my understanding and my level of comfort when planning and instructing small reading groups. I found that this study caused me to be much more intentional about my planning and instruction during small-group reading time. This intentionality led me to feel more prepared and more comfortable in conducting effective reading groups. When I took the assessment before implementing this research, most of my answers to the survey fell in Categories 1 and 2, with only a few answers falling in the 3rd category (the most advanced category). Category 1 answers are considered an inconsistent demonstration of successful guided reading groups. Category 2 answers are considered a proficient demonstration of successful guided reading groups. Category 3 answers are considered a distinguished, consistent demonstration of successful guided reading groups. When I took the assessment after my research, I found most of my answers falling in Categories 2 and 3, with a majority falling in the 3rd category. This shows that not only did my students' reading comprehension grow from this study, but so did my ability to effectively plan and implement reading instruction.
Questions I had after completing this action research include: