Data were collected using a triangulated model to show connections between sources and to identify which findings were or were not supported across all data points. The three data collection methods were a climate survey, anecdotal notes from observations, and a pre/post assessment analyzed with a paired t-test. These methods were chosen because, together, they measured the affective filter, direct instruction, and oral output production. The focus of this study was to determine whether direct instruction strategies would increase students’ English speaking proficiency. Within direct instruction, the areas of focus included lowering the affective filter, vocabulary, input comprehension, output production, and oral corrective feedback.
The climate survey consisted of 12 Likert-scale questions and two open-ended questions. The survey was conducted one-on-one with students using Google Forms on the computer. Six surveys were conducted in English and two were conducted in Spanish; the two Spanish surveys were for newcomer students. The Likert-scale response options were strongly disagree, disagree, agree, and strongly agree. Visuals were used to explain the response options to all students, and the visuals varied depending on the teacher giving the assessment. One teacher used four emotion faces to describe the scale for all students who took the survey in English, and the other teacher used hand motions for “yes a lot,” “yes,” “not really,” and “no” to go along with the Spanish translations.
The anecdotal notes were recorded in a brief checklist table. The table separated ten skills into a yes/no checklist for each of the seven students, with space for additional notes. The skills included answering yes/no questions, answering either/or questions, describing what was in a picture, describing what was the same or different in two pictures, describing what happened in a video, describing objects using prepositions, describing things using sequencing and transitions, giving an opinion or feeling, and communicating simple information. Then, as students worked during breakfast group, conversations were recorded on video to review at a later time. In other groups throughout the day, observations were made without video recordings. All anecdotal notes were organized into the checklist table.
The paired t-test compared scores on the unit four assessment from a direct instruction intervention. The assessment was given orally and was conducted according to the assessment script. There were 27 questions in five categories: information, actions, part/whole, prepositions, and opposites. The teacher assessed students one-on-one and scored responses strictly: if a student did not respond with the correct answer as stated by the script, the response was scored as incorrect. The assessment was not given to every student on the same day because four students participated in one group while the other three participated in another group.
I chose these three methods so that I could gather data from different areas of direct instruction. Direct instruction is a broad term, and when working with EL students it breaks down into several categories. The five areas that align with language acquisition are lowering the affective filter, vocabulary, input comprehension, output production, and oral corrective feedback. My data collection methods were distributed to measure these different areas of direct instruction.
This method focused on the affective filter and on understanding how students felt about their learning environment. According to research, the affective filter can be the most important factor when learning a new language. If students do not feel motivated, confident, safe, accepted, and involved, they will put their guard up, which can prevent them from acquiring a language. Therefore, it was important for me to evaluate how my students felt about their learning environment in order to determine whether their affective filters were impeding their language acquisition and oral production.
For this method, the purpose was to understand how my students felt about their learning environment, not to measure their English skills and proficiency. Therefore, I implemented accommodations such as first-language support and visuals, either emotion faces or hand gestures, in order to obtain the most accurate responses from my students. Six students were given the climate survey in English with the accommodation of emotion faces because they were able to understand the questions and respond by referencing a face that showed how they felt. Two students were given the survey in their first language because they were newcomers and the questions would have been difficult to understand and answer given their English proficiency. They were also given the accommodation of hand gestures because the translated version used vocabulary that fit the hand gestures better than the emotion faces. I also chose this climate survey because it was designed for students in kindergarten through second grade, and because it was created and approved by the CADRE cohort and given to the CADRE teachers to use in our research.
Anecdotal notes and observation captured a wide range of beginning and early intermediate language development skills applied in several situations. Beginning-level EL students were working on answering yes/no and either/or questions, describing what they saw, identifying similarities and differences, and communicating simple information. Early intermediate students were also working on describing situations using sequencing and prepositions, asking questions, and expressing an opinion. Because situations changed frequently and I wanted the data to be authentic, observing students, recording their conversations, and taking anecdotal notes was the best way to collect authentic data on my students. In addition, to collect data on specific language development skills, I implemented several activities aimed at those skills to evaluate how each student performed.
I chose to record these conversations because there were so many components of their language that I could focus on. A recording allowed me to home in on those components separately rather than simultaneously. I also used the recordings to look at skills across the weeks rather than in just one week at a time. At times students may not have been able to ask questions, but a few weeks later, when their interests were sparked, they could form several questions. The recordings were best for my students because they allowed me to analyze each student’s conversation skills closely and identify the areas they needed to work on individually.
A paired t-test was chosen to compare my students’ results before and after the direct instruction intervention unit. Students took the exact same oral assessment one-on-one with a teacher prior to learning the material and again after completing the unit. I also used this data to evaluate the growth percentage for different categories of students. Overall, this data was chosen to determine whether students’ oral language scores improved due to the direct instruction. The paired t-test also seemed to be the least biased measure because it was based neither on students’ opinions or feelings nor on the teacher’s observations.
I also chose this method because it was supported by my district. My district used this intervention with EL students, and the intervention already had post-tests embedded. I used the unit four post-test, which was already created, designed, and supported by my district and the intervention, as both the pre-test and the post-test to show the growth my students made before and after the material was taught. I also used this assessment because it was scripted and administered the exact same way for every student.
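To illustrate the analysis, a paired t-test works on each student’s pre- and post-test scores on the same 27-point assessment: it takes the per-student gain, then divides the mean gain by its standard error. The sketch below is a minimal illustration using only Python’s standard library; the seven score pairs are hypothetical placeholders, not the study’s actual data.

```python
import math
import statistics

# Hypothetical pre/post scores (out of 27) for seven students -- illustrative only.
pre = [8, 11, 6, 14, 9, 12, 7]
post = [15, 18, 12, 20, 16, 19, 13]

# Per-student gains: the paired t-test operates on these differences.
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)

mean_d = statistics.mean(diffs)          # average gain across students
sd_d = statistics.stdev(diffs)           # sample standard deviation of the gains
t_stat = mean_d / (sd_d / math.sqrt(n))  # paired t statistic, df = n - 1

print(f"mean gain = {mean_d:.2f}, t({n - 1}) = {t_stat:.2f}")
```

The resulting t statistic would then be compared against the t distribution with n − 1 degrees of freedom (here, 6) to judge whether the pre-to-post gain is statistically significant.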
Over the course of the study, students’ progress was consistently monitored to ensure that they met the end-of-unit goals. Every day, for every group, I started with a quick good morning and a handshake for each student, given either by myself or by another student, so that I could monitor how my students were doing when they entered my room. This quick check related back to their affective filter: whether they were motivated, confident, and had a good attitude, or whether something else was going on that I needed to take time to help them with. The way my students entered my classroom each day determined how I moved forward with my lesson. If a student was excited to share a personal story with me, I let them. If a student came in with their head down and their arms crossed, I pulled them aside and talked to them about how they were feeling. If two students came in giving each other dirty looks, I pulled them both aside to resolve the conflict.

Also, throughout the lessons, if a student was trying to answer a question or share something orally that was difficult for them, and other students started to laugh, I immediately stopped the group. I would explain that it was not okay to laugh at someone who was trying to learn and try something new. I would also put it into perspective for them and ask them to think about how they would feel if someone laughed at them when they were trying something new. Then I would say, “If you laughed at them, you owe them an apology and you need to say sorry now.” My students always stopped, apologized, and then we would move on. After experiencing this protocol, I noticed them move through the process without prompting; they would laugh, stop, and apologize, all on their own, without any direction from me.
My students knew that I had created a safe and respectful learning environment for us to try new things without feeling nervous or embarrassed, which helped them lower their affective filters and acquire a new language.
I also monitored progress during each breakfast club activity by walking around the room, observing conversations, providing corrective feedback, and pausing activities to clarify directions. As I observed, if students seemed unclear about what to do or if a common error was taking place, I would stop the group, give a clear explanation, or model if needed. I would also listen to conversations, provide one piece of corrective feedback, and then help students implement the feedback correctly before moving on. For example, during one barrier game, students were describing what was in a picture using a color and a shape. Many students named the shape first and then the color, rather than the color and then the shape. This was a very common mistake for students who knew another language, because the syntax of their first language most likely placed the noun before the adjective, unlike English. In this situation, I paused the activity and taught them to structure their descriptions with the color first.
Lastly, I monitored progress throughout each lesson of the direct instruction intervention. At the end of each activity within the intervention, each student had a turn to share their knowledge, and all students needed to respond correctly before we progressed to the next concept. This checkpoint served as a formative assessment to make sure students were grasping the material and concepts before I added more. It also allowed me to give specific, individual corrective feedback. For example, if a student was not able to answer a question correctly, I would have the student listen while one or two other students took their turns and modeled the response, and then I would return to the struggling student. This was helpful because students were able to hear a peer model, listen to the structure and content again, and then be successful before we moved on.