Which is more important: reaching the summit or climbing the mountain?
Assessments are means of determining learners’ linguistic ability; nevertheless, they go beyond mere measurement. I deliberately avoid terms such as “measurement” and “evaluation” in reference to assessment because I believe they carry subtle nuances that are insufficiently comprehensive. Instead, I rely on Dietel et al.’s (1991) definition of assessment “as any method used to better understand the current knowledge a student possesses.” Understanding students’ current knowledge and language abilities is crucial, akin to checkpoints along a hiking trail that inform us of a hiker’s progress and potential need for adjustments. Similarly, ongoing assessments allow educators to tailor instruction and guide students toward their learning goals.
Dichotomies abound in language assessment (e.g., summative vs. formative, formal vs. informal, norm-referenced vs. criterion-referenced). However, dynamic approaches to language learning, teaching, and assessment invite us to investigate these dichotomies rather than eliminate them. In designing assessments, educators must clarify their purposes, define their assessment constructs, design their tasks, and reflect on the benefits and drawbacks of the assessment principles (i.e., practicality, reliability, validity, washback, and authenticity) (Brown & Abeywickrama, 2019).
My LT 549 artifact—Reading Assessment—is designed to assess students’ macro-reading skills. It is a formative assessment, a form of “assessment for learning,” since it enhances student learning through constructive feedback and aids the teacher in designing and evaluating the curriculum, instruction, and areas of intervention (Black et al., 2003). In this light, the assessment constructs target students’ ability to interpret implied meaning and intended effect, make inferences, explain how the writer’s choices of words and structure (including punctuation) convey meaning, and analyze the use and effect of literary techniques (e.g., metaphors, similes, symbolism) and different viewpoints (first and third person) in a text. These constructs operate at the analytical, higher-order thinking level, which matches the CEFR C1 proficiency level for proficient users, who can understand a wide range of demanding, longer texts and recognize implicit meaning. The assessment is composed of an extended production task that incorporates a visual prompt in an attempt to differentiate content within the assessment. Although it was challenging to implement product differentiation while preserving the reliability of the assessment, I explored various differentiation strategies to cater to students’ diverse needs. In the content, I incorporated leveled texts, visual support, and carefully tailored open-ended questions to appeal to different learner needs. The extended production task allows for more creative construction, eliciting authentic language use. Furthermore, the assessment includes a self-designed analytic rubric, enabling learners to identify weaknesses and capitalize on strengths. Lastly, a section discussing the benefits and drawbacks of the assessment with regard to assessment principles is included to guide language teachers in their reflective process of assessment design, analysis, and evaluation.
LT 549 Artifact: Reading Assessment
LT 549 Artifact: Listening Assessment
My second LT 549 artifact—Listening Assessment—demonstrates another attempt at reform in the field of assessment. It features an accent that is underrepresented in listening assessments: a Nigerian accent. While designing this assessment, I consciously decided to break the norm of presenting a single “standardized” accent as the main listening material for learners to comprehend and analyze. Instead, I modified the assessment approach to challenge stereotypes and dismantle model-minority structures, a valuable approach to education that changes students and society (Kumashiro, 2000).
The listening assessment is a formal formative assessment designed to assess students’ ability to identify central ideas and specific details. Furthermore, the task assesses students’ inferential ability to identify implied meanings and draw conclusions. It must be noted that the main summative assessment in this course—the Cambridge International assessment—does not incorporate listening and speaking skills due to practicality concerns. Therefore, these skills are often absent from or neglected in the prescribed curriculum design. My assessment, however, is part of a continuous, cyclical effort to collect information and evidence about students’ learning. The task design features multiple-choice questions, which promote a standardized scoring format, minimizing the risk of scorer subjectivity and threats to inter-rater reliability. However, such a task design requires thorough and thoughtful construction of distractors to ensure that they are plausible and purposefully embedded. Lastly, the artifact includes a section guiding language educators through the process of designing a similar task. It highlights important task design features, including genre, prompt attributes, rate of speech, accent, Lexile level, lexical coverage, and length of the listening track.
Differentiated Assessments: Assessment as Learning
Language educators in institutional contexts, including myself, often face the challenge of following inherited assessment practices based on competency-based syllabuses. Graves (2000) notes that “competency-based syllabuses are particularly popular in contexts where the sponsor or funder wants to see measurable results” (p. 46). Such standardized assessments are categorized as summative assessments, or “assessments of learning,” whose primary goal is to attest to students’ achievement against a prescribed set of standards. Despite their evaluative role, these assessments have been used for gatekeeping purposes, not only for students but also for teachers. As a teacher of English in an Arab context for seven years, I have struggled with the idea of “teaching to the test,” since international standardized assessments were the only “reputable” mode of testing students’ linguistic abilities and attesting to teachers’ efficacy. Most of these assessments tend to include artificial, mechanical, and contrived tasks that do not mirror communicative language use. They test students’ language abilities to a “failure point,” which does not clearly reflect a learner’s linguistic competence or performance ability. Thus, my students and I were challenged by assessment practices that did not address our contexts or our needs. My experience has left me with one question: Does an Arab English language test need to follow the same approach to validity as an American test?
The truth is that a “one-size-fits-all” assessment design does not cater to anyone’s needs. Whereas differentiated instruction enjoys wide acknowledgment in the SLA literature, assessment—a major aspect of learning and teaching—is largely absent from, or rather ignored in, that discussion. Believing in the positive impact of differentiated assessments, my LT 548 artifact presents a complete differentiated assessment plan for a course design targeting students at the lower secondary level in Egypt. The plan entails various types of assessments that address students’ needs and skills. While designing this plan, two questions occupied my mind: If the outcomes of differentiated instruction are positive, why don’t practitioners consider applying it to assessment? And is it safe to believe that all students’ progress follows the same trajectory?
Larsen-Freeman (2007) argues that even with the same language, the same curriculum, and the same syllabus, learners choose and draw their own developmental trajectories. In this light, the assessment plan includes various types of assessments differentiated by product, including tests, debates, peer and self-assessments, oral presentations, and dynamic assessments. The artifact integrates multiple dichotomies of assessment: formal and informal, summative and formative, and norm-referenced and criterion-referenced, as well as integrative assessments. Furthermore, the plan details opportunities for students’ active participation in setting learning goals and success criteria (Fletcher & Shaw, 2012). Reflecting on my teaching experience, the use of exemplars (e.g., students’ work from previous and/or current cohorts) can be invaluable as a form of assessment, allowing students to elicit success criteria, engage in self-assessment, and evaluate their peers’ work. Through this technique, learners are guided toward understanding success and improvement in complex learning environments (Sadler, 2010), enabling them to become active agents in the process of devising and evaluating assessments.
LT 548 Artifact: Assessment Plan
References
Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2003). Assessment for learning. McGraw-Hill Education.
Brown, H. D., & Abeywickrama, P. (2019). Language assessment: Principles and classroom practices. Pearson.
Dietel, R. J., Herman, J. L., & Knuth, R. A. (1991). What does research say about assessment? Retrieved from http://methodenpool.uni-koeln.de/portfolio/What%20Does%20Research%20Say%20About%20Assessment.htm
Fletcher, A., & Shaw, G. (2012). How does student-directed assessment affect learning? Using assessment as a learning process. International Journal of Multiple Research Approaches, 6(3), 245–263. https://doi.org/10.5172/mra.2012.6.3.245
Graves, K. (2000). Designing language courses: A guide for teachers. Heinle & Heinle.
Kumashiro, K. K. (2000). Toward a theory of anti-oppressive education. Review of Educational Research, 70(1), 25–53. https://doi.org/10.2307/1170593
Larsen-Freeman, D. (2007). On the complementarity of Chaos/complexity theory and dynamic systems theory in understanding the second language acquisition process. Bilingualism: Language and Cognition, 10(1), 35. https://doi.org/10.1017/s136672890600277x
Sadler, D. R. (2010). Beyond feedback: Developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535–550. https://doi.org/10.1080/02602930903541015
Image Attributions
Cover Image by AXP Photography
Personal Considerations
The cover image portrays the Mortuary Temple of Hatshepsut, Luxor, Egypt. Like a comprehensive assessment, the temple reflects careful planning, construction, and craftsmanship. Additionally, the temple’s enduring presence and striking impact can be compared to a well-designed assessment that is practical and reliable.