California State University, Fullerton
As generative AI becomes increasingly common in the development of tests and instructional materials, it is important not to lose sight of the need to specify clearly the nature of the passages to be generated.
This workshop will discuss options for setting passage specifications for both testing and instructional use, how semi-scripted passages can provide more authentic conversational listening material, and how generative AI can be used to create passages that follow those specifications. During the workshop, you will become familiar with options for setting passage specifications, particularly:
using construct components to help plan passages;
vocabulary analysis, including the advantages and disadvantages of various lists;
measures and proxies for syntactic complexity;
other estimates of text difficulty;
semi-scripted listening passages and how to plan them;
capabilities of various LLMs for following passage specifications, particularly for vocabulary control; and
verifying that the output of LLMs matches the specifications.
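The last point above, verifying that LLM output matches the specifications, can be partially automated. As a minimal sketch of a vocabulary-control check, assuming a simple regex tokenizer and a tiny illustrative word list (a real check would lemmatize tokens and use a published list such as a frequency-based headword list):

```python
import re

def off_list_words(passage, allowed_words):
    """Return tokens not on the allowed word list, plus list coverage.

    Tokenization here is a lowercase regex split; a production check
    would lemmatize so inflected forms match their headwords.
    """
    tokens = re.findall(r"[a-z]+(?:'[a-z]+)?", passage.lower())
    allowed = {w.lower() for w in allowed_words}
    off_list = [t for t in tokens if t not in allowed]
    coverage = 1 - len(off_list) / len(tokens) if tokens else 1.0
    return off_list, coverage

# Illustrative word list only, not an actual published vocabulary list.
word_list = ["the", "cat", "sat", "on", "a", "mat", "and", "slept"]
flagged, cov = off_list_words("The cat sat on a velvet mat.", word_list)
# flagged -> ["velvet"]; cov -> 6/7 of tokens are on-list
```

A check like this can be run on every generated passage, with off-list words either flagged for revision or fed back to the LLM in a follow-up prompt.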
Participants should bring their laptops in order to participate most effectively.
Facilitator Bio
Nathan Carr is Chair of the Department of Modern Languages and Literatures and a Professor of TESOL at California State University, Fullerton. He earned his Ph.D. in applied linguistics at UCLA, with a specialization in second language assessment. He was a Fulbright Scholar in Kazakhstan in 2017 and has carried out U.S. State Department English Language Specialist projects in Vietnam, Indonesia, and Azerbaijan. His research interests are eclectic but focus on test development and validation, automated scoring of short-answer questions, curriculum development, and teacher training, particularly in task and materials development.
This workshop will introduce participants to the principles of proficiency-oriented, performance-based summative assessments and provide them with hands-on opportunities to design test tasks for listening and reading comprehension. During the workshop, you will:
gain an understanding of the concept of proficiency in teaching and testing
become familiar with the American Council on the Teaching of Foreign Languages (ACTFL) Proficiency Guidelines, including descriptions of the main levels and sublevels and of the criteria that describe real-world functional language abilities
apply proficiency principles and criteria to the design of test tasks for reading and listening comprehension
participate in hands-on task design
receive input and feedback, and learn how to critically analyze proficiency test tasks for classroom use
understand how summative proficiency assessments impact curriculum design and instructional choices
Limited to 42 participants.
Facilitator Bios
Catherine Baumann is a Senior Instructional Professor and Director of the University of Chicago Language Center (CLC), where she works to reframe language instruction as a 21st-century skill. She received her Ph.D. in Second Languages and Cultures Education at the University of Minnesota, specializing in reading comprehension and language testing. She oversees all programs in the CLC and consults for language programs in higher education on a variety of curricular and assessment-related issues.
Ahmet Dursun is the Executive Director of the University of Chicago Office of Language Assessment. He is responsible for establishing and maintaining the Office of Language Assessment and the University as innovative leaders in language learning and assessment, both in the design and development of new learning and assessment strategies, systems, and instruments and in the research that brings them to fruition and ensures their validity. His research has explored computer-assisted language instruction, language testing research and practice, test validation, AI-supported automated scoring of constructed-response assessment tasks, and language-for-specific-purposes domain analysis.
Phuong Nguyen is a language assessment specialist at the University of Chicago Office of Language Assessment where she provides language assessment literacy training to language instructors, develops new language tests, manages two assessment programs, and conducts test validation and evaluation research. She earned a Ph.D. in Applied Linguistics and Technology with a Statistics minor from Iowa State University. Her research interests include language assessment, technology-mediated language learning, corpus linguistics, and program evaluation.