Two years ago, when I first heard the word "assessment," only "tests" came to mind. Now, after extensive study, I can proudly say my understanding has expanded significantly. "Tests" grew into "types of tests," such as formal, informal, summative, and formative assessments, along with principles such as validity, practicality, reliability, authenticity, and washback. Through the LTS program, I've learned that assessment is a critical component of effective language teaching and learning, involving the systematic gathering of information about students' knowledge, skills, and progress. When designing a curriculum, I now ask myself, "What do I aim to assess?" Understanding that assessment covers many aspects and isn't always straightforward, I've come to appreciate its crucial role. Assessment is vital for evaluating students' proficiency, identifying areas for improvement, and guiding instructional strategies. In the following section, I'll demonstrate how I've applied these concepts to create assessments for various contexts.
LT549 - Assessment Creation Activity: Writing
LT549 - Measuring Language Ability
LT549 Measuring Language Ability gave me a deeper understanding of the principles for creating an effective assessment and the resources I could use for various contexts. This Assessment Creation for Writing Skills was designed for university students who are preparing to study abroad in a Chinese-speaking country. It is a formal achievement test (Brown & Abeywickrama, 2019) that aims to evaluate learners’ writing skills in the target language. This writing assessment exemplifies the concept of construct validity (Messick, 1989) by clearly defining the skills and constructs to be measured. The writing task is designed to assess students' abilities in expressing ideas coherently, using appropriate vocabulary and grammatical structures, organizing ideas logically, and demonstrating cultural awareness in written communication. This assessment also demonstrates my understanding of assessment purposes, specifically the distinction between achievement and proficiency assessment. Framing the writing task as both an achievement measure and a proficiency indicator shows awareness of the multiple functions assessments can serve in language education.
The rubric for this artifact reflects the use of analytic scoring (Badia, 2019), which is a key concept in writing assessment. By breaking down the evaluation into specific criteria such as content, organization, vocabulary, grammar, and cultural appropriateness, the rubric allows for a more detailed and nuanced assessment of students' writing abilities. This approach not only enhances the reliability of scoring but also provides more specific feedback (Vásquez & Harvey, 2010) to students on different aspects of their writing. Furthermore, the discussion section on reliability (Bachman, 1990) and practicality (Mousavi, 2009) demonstrates my awareness of the challenges inherent in assessment design. Scoring open-ended writing tasks involves a degree of subjectivity that can undermine reliability; rater training is therefore necessary for assessments of this kind.
LT549 - Measuring Language Ability
Assessment Creation Activity for Speaking is a project I created in LT549 Measuring Language Ability. It includes an interactive speaking task in the form of a role-play or interview that aims to assess university students’ speaking ability in Mandarin Chinese. It reflects my understanding of construct definition in speaking assessment, focusing on specific skills such as conversational ability, fluency, accuracy, and appropriate vocabulary use. By clearly outlining these constructs, the assessment design ensures that it measures the intended language abilities, enhancing its construct validity (Messick, 1989). This assessment particularly aligns with the principle of authenticity (Bachman & Palmer, 1996) by using a role-play scenario that simulates real-world interactions. This approach not only enhances the face validity (Messick, 1989) of the assessment but also provides a more meaningful context for students to demonstrate their speaking skills.
Because assessing speaking skills can be challenging, an analytic rubric based on the ACTFL Proficiency Guidelines for Speaking (Breiner-Sanders et al., 2000), the framework that underlies the Oral Proficiency Interview (OPI), was used to ensure objectivity and reduce evaluator bias. This reflects the use of standardized proficiency frameworks, an important concept in language assessment. I chose this framework because it not only provides a well-established basis for evaluating speaking skills but also allows for comparability with other assessments based on the same standards. Moreover, the discussion of reliability (Brown & Abeywickrama, 2019) and validity (Messick, 1989) challenges in this speaking assessment shows my understanding of these principles in practical application.
LT549 - Assessment Creation Activity: Speaking
LT548 - Assessment Plan: Elementary Chinese for Heritage Speakers
LT548 - Curriculum and Materials Development
I designed this Assessment Plan in LT548 Curriculum and Materials Development for an Elementary Chinese course that targets heritage learners. Sticking with the goal of making assessments “fun and engaging” for younger learners, I outlined various informal, formative assessments using techniques such as collaborative activities, class discussions, homework, and cultural projects. This focus aims to reduce the pressure that formal examinations can place on learners, monitor student learning, and provide continuous feedback throughout the learning process. One of the strengths of the assessment plan in this artifact is its alignment with the principle of authenticity (Bachman & Palmer, 1996). By incorporating real-world tasks like hands-on cultural projects and presentations, the plan ensures that assessments reflect genuine language use scenarios. This approach not only enhances the validity (Messick, 1989) of the assessments but also increases their relevance to students' real-life language needs. What I liked most about the cultural projects is that they not only serve as assessments of learning progress but also highlight the interconnected nature of language and culture.
My journey in understanding assessment has been transformative. What began as a simple association with "tests" has evolved into a comprehensive grasp of various assessment types, principles, and their critical role in language teaching and learning. Looking ahead, I aim to apply this knowledge in my teaching practice and create diverse, effective assessments that not only measure student progress but also enhance their learning experience. I believe that my expanded understanding of assessment will enable me to make more informed decisions in curriculum design and implementation, benefiting my future students' language learning experiences.
Bachman, L. F. (1990). Fundamental considerations in language testing. Oxford University Press.
Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: Designing and developing useful language tests (Vol. 1). Oxford University Press.
Badia, G. (2019). Holistic or analytic rubrics? Grading information literacy instruction. College & Undergraduate Libraries, 26(2), 109-116.
Breiner-Sanders, K. E., Lowe, P., Miles, J., & Swender, E. (2000). ACTFL proficiency guidelines-Speaking (Revised 1999). Foreign Language Annals, 33(1), 13.
Brown, H. D., & Abeywickrama, P. (2019). Language assessment: Principles and classroom practices. Pearson.
Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13-103). Macmillan.
Mousavi, S. A. (2009). An encyclopedic dictionary of language testing (4th ed.). Rahnama Publications.
Vásquez, C., & Harvey, J. (2010). Raising teachers’ awareness about corrective feedback through research replication. Language Teaching Research, 14(4), 421-443.