This section provides a summary and critical analysis of the data collected and draws connections among the findings. I will discuss the impact the study had on students and analyze my own practices and interactions throughout the study. Lastly, I will consider lingering questions about the data collection methods and the results.
The data supported my study by showing that direct instruction improved oral language production. As illustrated in Figure 2, the numerical data from the paired t-test showed that all students improved from their pretest score to their post-test score. The average growth was 33 percent (see Figure 3).
Figure 2. Paired t-Test. This figure illustrates the pretest and post-test results of the seven students in the study. Powered by PIKTOCHART.
Figure 3. Percentage Growth. This figure displays the percentage growth for each student from the paired t-test. Powered by PIKTOCHART.
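To make the analysis transparent, the sketch below shows how a paired t-test and per-student growth could be computed. The scores in it are hypothetical placeholders chosen only for illustration; the students' actual pretest and post-test values appear in Figures 2 and 3.

```python
# A minimal sketch of the paired t-test analysis. The scores below are
# hypothetical placeholders for the seven students, not the actual data.
from scipy import stats

pretest = [40, 55, 35, 60, 50, 45, 30]    # hypothetical pretest percentages
posttest = [75, 85, 70, 90, 80, 80, 65]   # hypothetical post-test percentages

# A paired t-test compares each student's post-test score to their own pretest.
t_stat, p_value = stats.ttest_rel(posttest, pretest)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Growth for each student in percentage points, and the group average.
growth = [post - pre for pre, post in zip(pretest, posttest)]
print("Per-student growth:", growth)
print("Average growth:", round(sum(growth) / len(growth)))
```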
As displayed in Figure 4, the climate survey showed that 100 percent of students strongly agreed or agreed that they felt safe in the classroom, that students were nice, that they had a lot of friends, and that they had fun learning. Also, 100 percent of students strongly agreed or agreed that I was a good teacher, treated students nicely, cared about them, believed they could learn, and listened to their ideas. Eighty-six percent of students strongly agreed or agreed that they liked school, that they behaved in the classroom, and that other students behaved in the classroom. As for the open-ended responses, some of the things the students liked about the class were learning, reading, writing, Rosetta Stone, and the treasure box. Some things the students wanted to change were having little headphones and having their other friends in groups, but most of the students said they did not want to change anything.
Figure 4. Climate Survey Results. These figures display students' responses to how they felt about their learning environment. Powered by Google Forms.
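As a small illustration of how these agreement percentages could be tallied from the survey export, the sketch below uses hypothetical responses; the actual Google Forms responses are summarized only in Figure 4.

```python
# Hypothetical Likert responses for one survey item from the seven students.
responses = ["Strongly Agree", "Agree", "Agree", "Strongly Agree",
             "Agree", "Disagree", "Strongly Agree"]

# Count "Strongly Agree" and "Agree" together, then convert to a percentage.
agreed = sum(r in ("Strongly Agree", "Agree") for r in responses)
percent = round(100 * agreed / len(responses))
print(f"{percent}% of students strongly agreed or agreed")  # 86% = six of seven
```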
My observations revealed which skills students had mastered and which were still very difficult for them. All of my students had mastered answering yes/no questions and either/or questions. They were also strong at identifying and describing objects in pictures and at using prepositions when asked where something was in relation to something else. All of them could speak with one or two words to name a noun with one adjective. Six of the seven students were able to communicate simple information as well as ask questions. The four older students successfully identified what was the same and what was different between pictures, but the kindergartners struggled. The three areas that were most difficult for my students were describing what happened in a video, using sequencing or transitional words, and giving an opinion or feeling.
Figure 5. Conversation Skills. This figure illustrates the percentage of students who mastered certain conversational skills. Powered by PIKTOCHART.
Several conflicts arose throughout the course of the study. The original action plan calendar allotted seven weeks to implement direct instruction and monitor progress with oral language development. Because of the window in which the study fell, three calendar conflicts impacted it. The first was an inservice and comp day. The second was English Language Proficiency Assessment (ELPA) testing. The study fell within the ELPA testing window, so the EL department had to arrange to test all EL students in our building in the four domains: reading, writing, listening, and speaking. Our building's ELPA schedule called for testing all day on Tuesdays, which removed six instructional days from the research window. The last calendar conflict was snow days. This year brought one of the worst winters we have seen in a while, with a total of six snow days, five of which fell within the research window. With all of the calendar conflicts, I expanded the research to eight weeks, but this gave the research window only four additional instructional days because one day was an inservice day. Ultimately, of the 40 instructional days in the eight-week research window, we lost a total of 14 and were left with 26 days of interrupted instruction, since only two of the eight weeks were full five-day weeks. Due to the calendar, we were only able to cover one unit rather than two for the direct instruction intervention, which limited the data. We were also unable to work on all of the skills or go as in depth with them. I think the climate survey was affected the least; however, students' behaviors were not as grounded because they did not have the consistency that they needed.
On top of the calendar conflicts, there were also scheduling conflicts. As an interventionist, I sometimes lose students' time with me to school-wide or grade-level celebrations. One week was Read Across America Week, during which students participated in building celebrations and learned about Dr. Seuss. Most of these celebrations took place at the end of the day, which was when four of my seven students were in group with me, so we had to shorten group so that they could still participate in the celebrations with their classes. Another scheduling conflict was having my three kindergartners first thing in the morning for language development. In the previous semester I had worked with my kindergartners in the afternoon, but this semester it was in the morning. I quickly discovered that working with kindergartners in the morning was not the most productive arrangement. My students were still very tired, lacking motivation, and at times emotional. This impacted their learning and the pace of our lessons because students were not energized and their brains were not awake enough to take in all of the language we were learning. The morning time slot also affected all of my students through attendance. Four of my seven students were tardy on multiple occasions. Our morning group ran from the time students arrived until 8:30 a.m., and students were considered tardy if they arrived at school after 8:15 a.m. All students were absent from group anywhere from one to six times throughout the study window, and one student left for a month toward the end of the study to go to Guatemala for a family emergency. For emerging students, language development is critical, and scheduling it in the morning was not the best choice.
Another component that impacted students' learning was student behavior. During groups I had one particular student who needed a lot of redirection, time, and attention. Some lessons went smoothly and quickly, while others involved many interruptions, redirections, and side lessons in social skills. I am not one to dismiss a student with behaviors; rather, I embrace and encourage them and teach them how to make better decisions, be part of our group, and be part of learning and finding success. This does take time away from content; however, all students learn a valuable social lesson about helping others even when those others are struggling. This impacted our lessons throughout the study because we were not always able to get through the content.
All of the data was interconnected in a way consistent with my literature review, which found that students are not able to acquire a new language if their affective filter is high. According to my climate survey, all of the students involved in this study had a low affective filter: they felt safe, motivated, interested, and involved in their learning. Based on the literature review, a low affective filter should allow them to acquire a new language more readily.
Of my three kindergartners, the student who strongly agreed with everything on the climate survey also scored the highest on her post-test assessment and spoke a lot during our breakfast conversations. This trend held for the other kindergartners: the student with the next-highest post-test score strongly agreed or agreed with everything, and the student with the lowest post-test score also disagreed with some questions on the climate survey.
As for the two first graders, they were very similar in all three data collection areas. On their post-test, one student missed three items and the other missed two. On the climate survey, one student strongly agreed with everything, and the other strongly agreed with all but one item, with which they simply agreed. In terms of their oral production skills, they were also very similar in their abilities and in how much they produced. They also shared the same culture and the same first and second languages; one is a first-year EL student and one is a newcomer. Ultimately, their lives in and out of school in the United States were very similar as well, and I think their data reflected that.
The second graders were also very similar to each other, although their cultures, languages, and lives were very different. The second graders were developmentally better at expressing their own opinions and feelings. On the climate survey, they mostly agreed with everything, strongly agreed with two items, and in one case disagreed with one item. The student who disagreed with liking school also missed one question on the post-test assessment; the other student strongly agreed with everything. Both of these students had their own struggles with language skills and, because of their different backgrounds, showed strengths in different areas. According to my observations, one student was better at describing items in a picture and the other was better at identifying similarities and differences. Likewise, one student was better at asking questions and the other was better at expressing opinions and feelings.
Overall, the triangulated data showed that direct instruction improved oral language production for emerging language learners. It showed that higher grade levels and more advanced cognitive development led to higher assessment scores. It also showed that newcomers scored the lowest, students born in the United States scored the next lowest, and first-year students scored the highest. This was realistic because some students born in the United States did not have a strong foundation in either their first or their second language, compared to first-year students, who could have a strong foundation in their first language since they had lived, and possibly attended school, in an environment that supported that language.
According to the data collected and analyzed, direct instruction had a strong impact on students' language acquisition in the emerging stages of language development. Also, building a learning environment where students felt safe, motivated, valued, listened to, and included increased their ability to acquire a language. Pull-out direct instruction was therefore also impactful because the learning environment was smaller, less threatening, and lower-stakes, which made it easier for students to try new things. Lastly, the content and the data being collected were designed and differentiated for the students' language development levels. This helped students stay motivated and find success because the content was not too difficult for them; it was presented based on their current ELPA scores and the skills they should be working on at that language development level.
When analyzing my instructional and research practices, along with those of my mentor, who helped assess students on their pre- and post-assessments and administered the climate surveys, I felt that all students were given the same information. The post-assessment was scripted, so both of us were able to present the questions and information in the same way without skewing the data. However, I noticed that on questions 14 and 15 more than half of my students answered incorrectly (see Figure 6). This was due to the language concept of adding the article "a" before a singular noun; most students omitted the "a" in their responses. Because a majority of students missed this concept, I should have put more emphasis on adding the article before a singular noun. As the researcher, I know that if more than half of my students miss a question, I should reflect on and understand why, so that I can teach the concept more effectively the next time. For the climate survey, we used visuals to help students understand the Likert scale. This was an important component because many students were able to quickly identify whether they strongly agreed, agreed, disagreed, or strongly disagreed based on the visuals. I also chose to give the climate survey to my newcomers in Spanish because I was interested in how they felt about school rather than in their ability to respond to me in English. I felt that this was a valid choice because it allowed me to accurately capture their feelings and opinions about their learning environment.
Figure 6. Assessment Questions. This figure displays the number of students who incorrectly answered each test question. Questions that all students answered correctly are not shown in the graph.
A few questions arose after I collected the data. First, if a student did not agree with the questions on the climate survey, what would their scores and oral production look like? I was curious whether the triangulated data would still display the same results for a student with a high affective filter. In theory, if a student disagreed with most of the climate survey, they would have a high affective filter and could score lower on the pre- and post-test assessments than students with low affective filters. They would also struggle with their oral language skills because they would not be motivated or interested in practicing. I was not able to explore the relationship between a high affective filter and assessment scores because none of my students displayed a high affective filter. Another question was whether the data would stay consistent over multiple unit assessments. For this unit assessment the data supported itself across all three collection methods; however, the sample size was rather small. If more students and more grade levels were involved, would that impact the data in different ways? I think it would, because the skills that emergent language students need to work on change between grade levels, and older students, especially those in their pre-teen and teen years, often have a higher affective filter. Lastly, I am still curious to see whether students improved on their ELPA scores, which come back in May, and what the growth looks like for the students involved in my study compared with those who were not. I predict that the students involved in my study will show growth in the areas of speaking and listening and will hopefully score closer to progressing. If this happens, next year I can focus on reading and writing for these students rather than on their speaking and listening skills.