Diego Zapata-Rivera

Distinguished Presidential Appointee

Educational Testing Service, Princeton NJ

Diego Zapata-Rivera is a Distinguished Presidential Appointee in the Learning and Assessment Foundations and Innovations (LAFI) Center at Educational Testing Service in Princeton, NJ. He earned a Ph.D. in computer science (with a focus on artificial intelligence in education) from the University of Saskatchewan in 2003.

His research at ETS has focused on innovations in score reporting and technology-enhanced assessment, including work on adaptive learning environments and game-based assessments. His research interests also include Bayesian student modeling, open student models, conversation-based tasks, virtual communities, authoring tools, and program evaluation.

Dr. Zapata-Rivera has produced over 100 publications including journal articles, book chapters, and technical papers. He has served as a reviewer for several international conferences and journals. He has been a committee member and organizer of international conferences and workshops in his research areas.

He is a member of the Editorial Board of User Modeling and User-Adapted Interaction, an Associate Editor of AI for Human Learning and Behavior Change, and a former Associate Editor of IEEE Transactions on Learning Technologies. Most recently, Dr. Zapata-Rivera has been invited to contribute his expertise to projects sponsored by the National Research Council, the National Science Foundation, NASA, and the U.S. Army Research Laboratory.



Recent book:

Score Reporting Research and Applications

The chapters in this volume provide a balance of research and practice in the field of score reporting. The first section includes foundational work on validity issues related to the use and interpretation of test scores, design principles drawn from areas such as cognitive science, human-computer interaction, and information visualization, and research on communicating assessment information to various audiences. The second section provides a select compilation of practical applications in real settings: large-scale assessment programs in K-12, credentialing and admissions tests in higher education, using reports to support formative assessment in K-12, applying learning analytics to provide teachers with class- and individual-level performance, and evaluating students' interpretation of dashboard data. These chapters highlight the importance of clearly communicating assessment results to the intended audience to support appropriate decisions based on the original purposes of the assessment. As more technology-rich, highly interactive assessment systems become available, it becomes increasingly important to keep in mind that the information provided by these systems should support appropriate decision making by a variety of stakeholders. Many opportunities for research and development involving the participation of interdisciplinary groups of researchers and practitioners lie ahead in this exciting field.