Diego Zapata-Rivera

ETS, Princeton NJ

Diego Zapata-Rivera is a Distinguished Presidential Appointee at Educational Testing Service in Princeton, NJ. He earned a Ph.D. in computer science (with a focus on artificial intelligence in education) from the University of Saskatchewan in 2003. 

His research at ETS has focused on innovations in score reporting and technology-enhanced assessment, including work on adaptive learning and assessment environments, conversation-based assessment, caring assessment, and game-based assessment. His research interests also include Bayesian student modeling, open student models, conversation-based tasks, virtual environments, authoring tools, and program evaluation. 

Dr. Zapata-Rivera has produced over 150 publications, including edited volumes, journal articles, book chapters, and technical papers. He has served as a reviewer for several international conferences and journals, and has been a committee member and organizer of international conferences and workshops in his research areas. He is a Co-PI and research co-director of the INVITE AI Institute (invite.illinois.edu). 

Dr. Zapata-Rivera is a member of the International AI in Education Society Executive Committee (2022-2027), an IEEE Education Society Distinguished Lecturer (2024-2025), a member of the Editorial Board of User Modeling and User-Adapted Interaction, an Associate Editor of IJAIED and of AI for Human Learning and Behavior Change, and a former Associate Editor of IEEE Transactions on Learning Technologies. He has been invited to contribute his expertise to projects sponsored by the National Research Council, the National Science Foundation, NASA, and the US Army Research Laboratory. 



Recent book:

Score Reporting Research and Applications 

The chapters in this volume provide a balance of research and practice in the field of score reporting. The first section includes foundational work on validity issues related to the use and interpretation of test scores, design principles drawn from areas such as cognitive science, human-computer interaction, and information visualization, and research on communicating assessment information to various audiences. The second section provides a select compilation of practical applications in real settings: large-scale assessment programs in K-12, credentialing and admissions tests in higher education, using reports to support formative assessment in K-12, applying learning analytics to provide teachers with class- and individual-level performance information, and evaluating students’ interpretation of dashboard data. These chapters highlight the importance of clearly communicating assessment results to the intended audience to support appropriate decisions based on the original purposes of the assessment. As more technology-rich, highly interactive assessment systems become available, it becomes increasingly important to keep in mind that the information provided by these systems should support appropriate decision making by a variety of stakeholders. Many opportunities for research and development involving the participation of interdisciplinary groups of researchers and practitioners lie ahead in this exciting field.