The fourth pillar, Assessment, is an important and prevalent topic throughout the LTS program. Assessment refers to the measurement and observation of a student's level of proficiency and/or understanding. Assessments can be formal or informal, as well as formative or summative. LT 549: Measuring Language Ability introduced various concepts of assessment, but the most prevalent were the principles of assessment.
When developing an assessment, there are many aspects to take into account. Practicality refers to how feasible the assessment is in terms of creating it, administering it to students, and evaluating it after it has been completed. The second principle, reliability, concerns how consistently the assessment measures the skill it targets. One of the most complex principles, validity, comes in many forms: construct validity, criterion-related validity, face validity, content validity, and consequential validity. The two that I focused on most were content and construct validity. Content validity asks whether the content is appropriate for assessing what it is intended to measure, whereas construct validity asks the same question of the construct underlying the assessment rather than the content inside of it (Brown & Abeywickrama, 2018). Authenticity refers to how closely the assessment mirrors situations that occur in the real world (Bachman & Palmer, 1996; Brown & Abeywickrama, 2018). I enjoyed building exams and tests that matched this principle because they not only assess students' knowledge but also give them practice producing language they may have to use outside the classroom. The last principle we discussed was washback, which concerns the consequences of the assessment, positive or negative, for the teacher and the students (Brown & Abeywickrama, 2018). When creating assessments, I would reflect on these principles and weigh how well each one was met to examine the quality of the assessment I produced.
The first artifact is an assessment titled Writing Assessment Creation Activity that I constructed in LT 549: Measuring Language Ability. This assessment focuses on testing students' writing skills in English. In this context, students were university students in Japan learning English as a foreign language at the novice-mid level of proficiency, according to the ACTFL proficiency guidelines. Students were asked to answer a writing prompt: “Think about a trip that you went on. Where was it? What did you do? How long was it? What was your favorite part? Would you want to go back? Describe your trip in 50 to 100 words.” Students receive the prompt ahead of time and are given a few days to work on the assignment at home. Allowing students to prepare can make their responses more reliable (Brown & Abeywickrama, 2018) and alleviate the language anxiety and stress surrounding exams. Language anxiety is an issue for learners; it can lead them to second-guess their choices and decrease their proficiency in the target language (Ellis, 2019). For example, anxiety can sway students into abandoning their first answer, which is often the correct one, and choosing a different answer that turns out to be incorrect (Horwitz et al., 1986).
The second artifact is another examination from LT 549: Measuring Language Ability, the Speaking Assessment Creation Activity, which focuses on assessing students' speaking skills in the target language. This assessment works as a proficiency test to determine whether learners have reached the next proficiency level. As with the first artifact, students are EFL learners at the novice-mid level of proficiency, according to the ACTFL proficiency guidelines. For this exam, students are given a prompt describing a situation in which they will participate the following week. The assessment draws on the principle of authenticity by creating a situation that learners may experience in the real world. The test asks students to provide a job description and a reason for being late; both tasks come from the "I Can…" statements created by ACTFL. Building the assessment around these statements increases its content validity. Also, as mentioned above, giving students the prompt in advance so they can prepare makes their responses more reliable by decreasing language anxiety and pressure (Brown & Abeywickrama, 2018).
The third artifact from the LTS program is a lesson plan I created before taking LT 549: Measuring Language Ability. I created the lesson plan, Negotiations and Disagreements in Business Communication, in LT 536: Design for Learning Language Systems with my partner, Bibi Halima. In this lesson, students are at the intermediate-high level of proficiency, according to the ACTFL guidelines, and are enrolled in an EFL course at a university in Pakistan that specializes in communication in business environments. One of the significant activities in this plan asks students to participate actively in a debate circle, using the formal forms of communication that would be expected in a business situation. During this activity, the teacher(s) observe the debate to assess students' understanding and whether the learning objectives are being met. I learned that assessments can occur in many different ways: summative or formative, formal or informal, and with or without students being aware they are being assessed. This activity is an informal formative assessment: the teacher(s) observe students to identify areas for improvement, and the observations inform what the teacher(s) should focus on in upcoming lessons (Brown & Abeywickrama, 2018). In this artifact, students do not know that they are being assessed, so the teacher(s) can observe their output without adding stress and decide whether the learning objectives are being achieved and whether more practice would be beneficial.
The LTS program gave me many chances to learn how best to assess my students, whether through a formal exam or classroom observations. I also examined tests created by other instructors and test makers to weigh their benefits and drawbacks. Given my future teaching context, I see myself assessing more through observation and evaluating pre-made assessments rather than creating my own, but I am glad that I had the opportunity to practice producing them. In the future, I plan to develop assessments that benefit my students and help prepare them to produce the target language outside of my classroom.
References
ACTFL. (n.d.). NCSSFL-ACTFL can-do statements: Proficiency benchmarks. https://www.actfl.org/uploads/files/general/Resources-Publications/Novice-Can-Do_Statements.pdf
Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: Designing and developing useful language tests (Vol. 1). Oxford University Press.
Brown, H. D., & Abeywickrama, P. (2018). Language assessment: Principles and classroom practices. Pearson.
Ellis, R. (2019). Understanding second language acquisition (2nd ed.). Oxford University Press.
Horwitz, E. K., Horwitz, M. B., & Cope, J. (1986). Foreign language classroom anxiety. The Modern Language Journal, 70(2), 125–132. https://doi.org/10.1111/j.1540-4781.1986.tb05256.x