Brigham Young University–Hawaii
Troy Cox
Brigham Young University
As artificial intelligence (AI) tools become more sophisticated and accessible, concerns about academic integrity in assessment continue to grow. There is no one-size-fits-all approach to developing strategies that mitigate these concerns. This panel will explore the evolving landscape of AI and cheating, bringing together perspectives from a testing industry expert, a higher education administrator, and a faculty expert on classroom-level challenges. Panelists will examine the impact of AI on assessments, institutional policies and responses, and practical strategies for maintaining academic integrity in the classroom.
The industry expert will address how AI challenges traditional assessment methods and how testing organizations are adapting to ensure the security of their large-scale standardized exams. The higher education administrator will discuss institutional strategies, including policy development, faculty support, and the role of technological solutions in mitigating AI-related cheating. Finally, a faculty expert will provide insight into classroom-level responses, exploring how AI is changing student behaviors and what instructors can do to foster ethical learning.
A discussant will moderate the conversation, ensuring a dynamic exchange of ideas among panelists and engaging the audience through structured Q&A. This session aims to provide attendees with actionable insights to uphold academic integrity in the AI era.
AI vs. Exams: Securing the Future of Standardized Testing
Abstract: As artificial intelligence rapidly transforms education and assessment, traditional testing methods face unprecedented challenges. In this short talk, I will explore emerging threats posed by AI to exam security, including automated test-taking and answer synthesis. I will also discuss how the industry is responding through enhanced security protocols, AI-driven proctoring, AI-generated item banks, examinee behavior analysis, and innovative assessment designs that prioritize critical thinking over rote memorization. By rethinking how we evaluate knowledge, we can uphold the validity of standardized testing in the AI age and, more importantly, align with the paradigm shift that AI is ushering in.
Bio: Reza Neiriz holds a PhD in applied linguistics and technology with a focus on computer-mediated language assessment. He is a Machine Learning Engineer at MetaMetrics Inc., where he researches and develops AI-driven assessment solutions. His research focuses on automated assessment of performance tests, especially constructs like interactional competence in tests of oral communication.
AI, Cheating, and Validity—An Institutional Perspective
Abstract: As artificial intelligence (AI) reshapes education, concerns about academic integrity have intensified. While cheating remains a critical issue, it is ultimately a subset of a broader concern that Dawson et al. (2024) refer to as assessment validity—the degree to which assessment scores are interpreted and used accurately to reflect student learning. I will argue that institutional responses should focus less on policing AI-related cheating and more on ensuring that AI-era assessment decisions remain valid and meaningful.
Given this shift in focus, I will also discuss ongoing institutional efforts to address these challenges. We are currently exploring strategies to mitigate these concerns, including faculty training in assessment design, guidelines for students on ethical AI use, and policy frameworks that balance security with pedagogical soundness. Rather than reacting solely to misconduct, our goal is to investigate institutional approaches that promote academic integrity while adapting to the evolving role of AI in education.
By reframing the conversation around validity rather than just cheating, institutions can shift from reactive enforcement to a proactive, evidence-based approach that strengthens learning outcomes and academic integrity in the AI era.
Bio: Brent A. Green is the Associate Academic Vice President for Accreditation, Assessment, and Curriculum at Brigham Young University–Hawaii, where he also teaches courses in language testing and English syntax. He holds a Ph.D. in Applied Linguistics and Language Assessment from UCLA and has dedicated over thirty years to teaching, testing, mentoring, and administration in higher education. His work focuses on language assessment, academic accreditation, and educational policy, contributing to the advancement of both assessment practices and language pedagogy.
Classroom Assessment and Generative AI: Defining and Encouraging “Appropriate Use”
Abstract: As AI tools continue their spread into daily life and education, clear and effective guidance on the appropriate use of AI by teachers and students often lags behind. This has the potential to negatively impact several aspects of instruction at the classroom level, including assessment. This presentation discusses several necessary conditions for the effective and appropriate use of generative AI in classroom assessment: adequate AI literacy and assessment literacy, co-construction of policies on AI use, and rethinking assessment constructs in light of the current realities of generative AI.
Bio: Nicholas Swinehart is the Managing Director of Instructional Technology at the University of Chicago Language Center, where he supports instructors of over fifty languages with technology use and professional development. He is co-editor of the February 2024 CALICO Journal special issue on social media for language learning and co-author of Teaching Languages in Blended Synchronous Classrooms: A Practical Guide (2020).
Teaching language testing requires balancing theoretical foundations, practical applications, and student engagement. While instructors develop creative course structures, assignments, and strategies, opportunities to exchange and refine these approaches remain limited. This session provides a syllabus exchange and show-and-tell discussion, creating a space for educators to share successful course designs, assignments, and assessment strategies.
Participants will be invited to upload their syllabi in advance, along with key assignments, rubrics, or instructional materials. These resources will be made available via a QR code for attendees to review and download. The session will follow a structured format, moving from brief presentations to small group discussions and concluding with a moderated exchange of insights.
Session Structure:
Syllabus Upload & Access – Participants submit their syllabi and supporting materials in advance, ensuring that attendees can review and download them during the session.
Brief Presentations (3–5 minutes each) – Each presenter shares:
A key feature of their syllabus (e.g., course structure, assessment design, innovative assignment).
How it enhances student engagement and learning.
Student feedback or observed learning outcomes.
Why it works and how others might adapt it.
Small Group Discussions – Attendees break into groups to:
Identify ideas from the presentations that they could integrate into their courses.
Discuss potential adaptations and modifications for their specific teaching contexts.
Share their own experiences with similar strategies.
Moderated Discussion – I will facilitate a whole-group discussion, synthesizing ideas from both the presentations and the small groups, and guiding the conversation around:
Common themes and best practices in language testing course design.
Challenges and solutions in assessment literacy instruction.
Additional insights and adaptations inspired by the session.
Potential Discussion Topics:
Structuring a language testing syllabus to balance theory, application, and practice.
Assignments that effectively teach reliability, validity, and fairness.
Engaging ways to introduce students to test development and analysis.
Strategies for assessing student learning in language testing courses.
Objectives:
To provide educators with concrete examples of effective language testing course designs.
To facilitate a syllabus exchange that enables instructors to refine and enhance their courses.
To encourage active reflection and discussion through small group engagement and a moderated exchange.
To ensure attendees leave with practical, downloadable resources and new teaching ideas to implement.
Relevance:
This session addresses the need for professional development in language testing education by fostering collaboration, resource sharing, and active discussion. The combination of syllabus exchange, small group engagement, and moderated discussion ensures that attendees not only gain valuable insights but also leave with actionable ideas to strengthen their courses.
Bio: Troy L. Cox, PhD, has worked at Brigham Young University since 1996 and currently serves as the Associate Director of Proficiency Services and Research at the Center for Language Studies and coordinator of the Language Sciences Laboratory. He holds a PhD in Instructional Psychology and Technology, specializing in educational measurement and the learning sciences, with a focus on language learning. His research and publications focus on language proficiency, assessment, language acquisition, self-assessment, and objective measurement.