The 2025 plenary will be delivered at 10:30 a.m. on Friday, September 26, 2025, in the Intercultural Center Auditorium.
Sun-Young Shin is an Associate Professor in the Department of Second Language Studies at Indiana University. He earned his Ph.D. in Applied Linguistics from UCLA. His research focuses on authenticity in L2 listening assessment, L2 pragmatics assessment, and standard-setting methodologies. His extensive work has been published in numerous reputable journals and book chapters. He has been invited to deliver keynote speeches, lectures, and workshops on L2 assessment worldwide, including in Macau, Mexico, South Korea, Thailand, Vietnam, and the U.S. He serves on the editorial boards of Language Testing and Language Assessment Quarterly and co-chaired LTRC 2025 in Bangkok, Thailand.
As generative AI continues to reshape educational practices, language assessment professionals are navigating a landscape filled with both opportunities and challenges. While interest in AI is surging, we must be cautious not to pursue innovation for its own sake. The key question is not what AI can do, but how it can be meaningfully applied to enhance the validity, fairness, and interpretability of language assessments. In this talk, I argue for a shift in focus: from developing AI as a standalone solution to integrating it as a purposeful tool that supports, rather than replaces, fundamental assessment principles.

AI-powered applications, such as chatbot-assisted assessments, show promise in providing immediate feedback and learner-specific diagnostics (Xi, 2025). These advancements offer potential for improving formative assessment and informing the use of test scores. However, concerns remain about the fairness and transparency of AI-generated outputs, especially when assessing diverse learner populations. Over-reliance on AI may introduce construct-irrelevant variance or obscure the very skills we aim to measure.

Rather than using AI as a crutch, we should harness its capabilities to enhance score interpretation, diagnostic insight, and decision-making. This talk proposes aligning AI integration with core assessment purposes and ensuring that innovation is guided by principles of validity, not convenience. As we weather the storm of rapid technological change in language assessment, our responsibility is to ground innovation in sound theory and practical benefit for language learners, educators, and stakeholders alike.