University of Pennsylvania
Iowa State University
Iowa State University
Antony John Kunnan
University of Macau
Columbia University
Title: Child-centered assessment: concepts, research and practice
Abstract: As a growing number of children are learning an additional language (AL) in instructional settings (referred to as young language learners, or YLLs, defined as AL-learning children in pre-primary and primary school), integrating assessment into pedagogy to support their learning has become a pressing issue for many educators (Prošić-Santovac & Rixon, 2019). Do traditional test-oriented approaches truly help YLLs learn an AL more effectively? Do they fully understand the purpose of assessments? Do we, as educators of YLLs, know how they feel about assessments and what they hope to gain from them? What effort have we made to make assessments engaging for YLLs? Do we simply assume that taking assessments is inherently anxiety-inducing? Have we discussed assessment criteria with YLLs? Have we collaborated with YLLs in designing assessments? To address these questions, in this talk I propose incorporating child-centered approaches to assessment for YLLs.
Child-centeredness is a multifaceted concept with various interpretations. In this presentation, I begin by addressing major ethical and conceptual issues related to child-centeredness in assessment: (1) ensuring assessments are developmentally appropriate for YLLs; (2) respecting their voices and agency; (3) fostering their autonomy, motivation, and learning; and (4) contextualizing assessments within their daily lives and instructional practices. I then explore each issue in greater depth, drawing on examples from previous studies, including my own. These examples include critical cognitive, social, and affective developmental factors in designing and implementing assessments for YLLs, assessment literacy for YLLs, self-assessment, and the use of digital technology in both assessment and instruction. Practical suggestions are also provided. While the talk focuses on YLLs, many of the issues discussed are also relevant to learners of all ages.
Title: Assessing language proficiency around the world
Abstract: In the last 75 years, the field of language assessment has benefited from research conducted on language assessments in English and a few European (French, Spanish, and German) and Asian languages (Japanese, Korean, and Mandarin). At the same time, language assessment practices elsewhere in the world, which could inform our theories, practices, and policies, have been largely disregarded. In this talk, we editors propose to share plans and challenges for a new volume titled Assessing Language Proficiency around the World (Wiley, 2026), in which 50 prominent and lesser-known languages will be discussed. This collection will be a major update to Volume 4 of The Companion to Language Assessment (Wiley, 2014), with an effort to provide a fair representation of world languages and a renewed focus on global inclusivity.
There are many challenges in producing this volume. The first is contextual diversity. Languages are situated in contexts that vary for historical, social, political, and economic reasons. For example, there are official monolingual contexts (Australia, Austria, Iceland, and Portugal), bilingual contexts (Canada, the Philippines, and Kenya), trilingual contexts (Belgium and Luxembourg), and multilingual contexts (Singapore, Switzerland, South Africa, and India). Within these contexts, there are also many indigenous and vulnerable or endangered languages. How different languages are positioned within different contexts thus has important implications for the discussion of assessment practices.
The second challenge is working out the structure of the chapters. Most chapters will have the following structure: overview of the language (geographical spread, language status, home/public use), general linguistic features (lexico-grammatical systems, discourse patterns, etc.), language in context (public use of the language; mono-/multilingual society), teaching and learning systems (schools, colleges), assessment systems (educational, professional, and governmental policy areas), research on assessments (validation, fairness, etc.), and new or future initiatives. Some chapters, however, will follow a modified structure reflecting their unique individual contexts.
The third challenge is to present the chapters in appropriate groups. In the 2014 edition, chapters were grouped by continents rather than by language families. In this collection, chapters will be grouped by language families that reflect genetic relationships between languages. This grouping would be valuable for understanding linguistic features and assessment needs. For example, Indo-European languages share grammatical structures that may influence assessment design and tasks (for example, cloze tests). Thus, chapters will be presented in six major language family groups: Indo-European, Sino-Tibetan, Niger-Congo, Austronesian, Afro-Asiatic, and Dravidian. Within these major language families, chapters will be grouped by sub-families (for example, the Indo-European language family will have sub-groups such as Celtic, Germanic, Indo-Aryan, Balto-Slavic, etc.).
Overall, it is expected that knowledge of policies and practices of language assessments in their respective contexts will prompt the field to theorize and implement language assessments from context-dependent and language-specific perspectives. Assisting us in this endeavor are associate editors Ahmet Dursun, Lynda Taylor, and Nguyen Thi Ngoc Quynh, and editorial assistant Coral Yiwei Qin.
Title: Validity Evidence to Support the Use of a Collaborative Problem-Solving Scenario to Measure Situated Foreign Language Proficiency: A Case Study
Abstract: To further this agenda, several SFL testers have turned to scenario-based language assessment (SBLA) as a means of measuring situated SFL proficiency in the context of CPS (e.g., Banerjee, 2018; Beltrán-Zuñiga, 2024; Joo et al., 2023; Purpura, 2021, 2022; Purpura & Banerjee, 2021; Seong, 2024; Shore et al., 2017). This approach holds significant promise in terms of its potential to measure the ability to use an SFL to develop and display understandings related to the achievement of a highly contextualized scenario goal. It also comes, however, with challenges related, for example, to conceptualizing and implementing task design features that mirror real-life problem solving, as well as challenges related to technology.
Against this backdrop, the current "proof-of-concept" study investigated the effectiveness of using a CPS scenario to measure examinees' ability to use an SFL to develop topical understandings within the SBLA, so that these understandings could be used to present a compelling, evidence-based pitch to a committee of judges. The study first examined claims supporting the psychometric functionality of the SBLA measures, as well as claims relating to topical learning within the scenario. It then drew on a learning-oriented language assessment framework (Purpura & Turner, 2018) to examine claims about examinees' perceptions of and reactions to this new assessment technique.