e-Revise: An automated writing evaluation system to support text-based argumentation and revision
Using Natural Language Processing for Scoring Writing and Providing Feedback at Scale
Our project studies the use of Natural Language Processing (NLP) techniques to score students' argument writing and provide automated feedback that helps students strengthen their essays. We have recently expanded the project to focus on automated assessment of students' revision efforts, with feedback designed to develop their revision skills. Our work has been funded by the National Science Foundation, the Department of Education's Institute of Education Sciences, and the Learning Research & Development Center at the University of Pittsburgh. A summary of some of our research contributions is provided in the research brief published by the RAND Corporation; further details can be found in the Publications section below.
People
We are an interdisciplinary team of researchers from computer science and education, with expertise in natural language processing, literacy instruction, quantitative and qualitative methods, and policy.
Principal Investigators
Students
Student and Staff Alumni
Publications
Our publications span investigations of the technical quality of our scores, fairness evaluations, connections of our system to learning theory, and validity studies relating our scores to multiple measures of classroom instruction.
2023
Zhexiong Liu, Diane Litman, Elaine Wang, Lindsay Matsumura, and Richard Correnti. Predicting the Quality of Revisions in Argumentative Writing. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 275–287, Toronto, Canada, 2023.
2022
Matsumura, L.C., Wang, E.L., Correnti, R., & Litman, D. (2023). Tasks and feedback: An exploration of students’ opportunity to develop adaptive expertise for analytic text-based writing. Assessing Writing, 55. doi: https://doi.org/10.1016/j.asw.2022.100689
Correnti, R., Matsumura, L.C., Wang, E.L., Litman, D., & Zhang, H. (2022). Building a validity argument for an automated writing evaluation system (eRevise) as a formative assessment. Computers & Education Open. doi: https://doi.org/10.1016/j.caeo.2022.100084
Wang, E.L., Matsumura, L.C., Litman, D., & Correnti, R. (2022). Contributions to research on automated writing scoring and feedback systems (RB-A1062-1). Santa Monica, CA: RAND Corporation. doi: https://doi.org/10.7249/RBA1062-1
2021
Diane Litman, Haoran Zhang, Richard Correnti, Lindsay Matsumura and Elaine Wang, A Fairness Evaluation of Automated Methods for Scoring Text Evidence Usage in Writing, Proceedings of the 22nd International Conference on Artificial Intelligence in Education (AIED), June, 2021.
Haoran Zhang and Diane Litman, Essay Quality Signals as Weak Supervision for Source-based Essay Scoring, 16th Workshop on Innovative Use of NLP for Building Educational Applications (BEA), pp. 85-96, April, 2021.
2020
Tazin Afrin, Elaine Lin Wang, Diane Litman, Lindsay Clare Matsumura, and Richard Correnti, Annotation and Classification of Evidence and Reasoning Revisions in Argumentative Writing, Proceedings of the 15th Workshop on Innovative Use of NLP for Building Educational Applications, July, 2020.
Correnti, R., Matsumura, L.C., Wang, E.L., Litman, D., Rahimi, Z., & Kisa, Z. "Automated scoring of students’ use of text evidence in writing". Reading Research Quarterly, 55(3), 2020.
Elaine Lin Wang, Lindsay Clare Matsumura, Richard Correnti, Diane Litman, Haoran Zhang, Emily Howe, Ahmed Magooda, Rafael Quintana, eRevis(ing): Students' Revision of Text Evidence Use in an Automated Writing Evaluation System, Assessing Writing 44, 2020.
Haoran Zhang and Diane Litman, Automated Topical Component Extraction Using Neural Network Attention Scores from Source-based Essay Scoring, Proceedings of the 58th Annual meeting of the Association for Computational Linguistics (ACL), July, 2020.
2019
Zhang, H., Magooda, A., Litman, D., Correnti, R., Wang, E., Matsumura, L. C., ... & Quintana, R. "eRevise: Using Natural Language Processing to Provide Formative Feedback on Text Evidence Usage in Student Writing". Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, pp. 9619-9625), at IAAI Honolulu, HI, February, 2019.
2018
Haoran Zhang and Diane Litman, "Co-Attention Based Neural Network for Source-Dependent Essay Scoring," Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, at NAACL New Orleans, LA, June, 2018.
2017
Haoran Zhang and Diane Litman, "Word Embedding for Response-To-Text Assessment of Evidence", Proceedings Student Research Workshop of the Annual Meeting of the Association for Computational Linguistics, pp. 75-81, Vancouver, Canada, July, 2017.
Rahimi, Z., Litman, D., Correnti, R., Wang, E. and Matsumura, L.C., "Assessing students’ use of evidence and organization in response-to-text writing: Using natural language processing for rubric-based automated scoring". International Journal of Artificial Intelligence in Education, 27(4), pp.694-728. 2017
2016
Zahra Rahimi and Diane Litman, "Automatically Extracting Topical Components for a Response-to-Text Writing Assessment", Proceedings 11th Workshop on Innovative Use of NLP for Building Educational Applications (NAACL Workshop), pp. 277-282, San Diego, CA, June, 2016. (short paper)
2015
Zahra Rahimi, Diane Litman, Elaine Wang and Richard Correnti, "Incorporating Coherence of Topics as a Criterion in Automatic Response-to-Text Assessment of the Organization of Writing", Proceedings 10th Workshop on Innovative Use of NLP for Building Educational Applications (NAACL Workshop), pp 20-30, Denver, Colorado, June, 2015.
2014
Zahra Rahimi, Diane Litman, Richard Correnti, Lindsay Clare Matsumura, Elaine Wang and Zahid Kisa, "Automatic Scoring of an Analytical Response-To-Text Assessment", Proceedings 12th International Conference on Intelligent Tutoring Systems (ITS), pp. 601-610, Honolulu, HI, June, 2014.
Correnti, R., Matsumura, L. C., Hamilton, L., & Wang, E. "Assessing students' skills at writing analytically in response to texts", The Elementary School Journal, 114(2), pp.142-177. 2014.
Screenshots of eRevise(+RF) Systems
The Architecture of eRevise+RF (New)
The eRevise+RF system builds on the prior eRevise system and focuses on analyzing revisions in text-based argument essays. After students submit their first and second drafts, eRevise+RF's automated essay scoring (AES) component scores the quality of students' revisions and provides formative feedback on their use of text evidence. Students can then improve their third draft based on the automated feedback messages, as sketched below.
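To make the revision-analysis step concrete, the sketch below shows one way the first and second drafts could be aligned and each revision scored. It is only an illustration under simple assumptions: the alignment uses Python's difflib, and classify_revision is a hypothetical stand-in for eRevise+RF's actual revision-quality classifier.

```python
# Illustrative sketch of extracting and scoring revisions between two drafts.
# classify_revision is a hypothetical stand-in for the system's classifier.
import difflib


def extract_revisions(draft1: list[str], draft2: list[str]) -> list[tuple[str, str, str]]:
    """Align sentences across drafts and return (operation, old_text, new_text) triples."""
    matcher = difflib.SequenceMatcher(a=draft1, b=draft2)
    revisions = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            continue  # unchanged sentences are not revisions
        revisions.append((op, " ".join(draft1[i1:i2]), " ".join(draft2[j1:j2])))
    return revisions


def score_revision_quality(revisions, classify_revision) -> float:
    """Fraction of revisions judged successful by the supplied classifier."""
    if not revisions:
        return 0.0
    successful = sum(classify_revision(old, new) == "successful" for _, old, new in revisions)
    return successful / len(revisions)
```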
The Architecture of eRevise (Prior System)
After students submit their first drafts, eRevise's AES component extracts features representing the quality of text-evidence usage in terms of the constructs in the RTA Evidence rubric. Some of these features are then passed as input to the automated writing evaluation (AWE) system's feedback selection algorithm, which in turn outputs the subset of predefined feedback messages that best addresses the problems identified in the first draft.
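As a rough illustration of this feature-to-feedback flow, the sketch below maps rubric-style evidence features to predefined messages. The feature names, thresholds, and message texts are placeholders for illustration only, not the actual values or rules used by eRevise.

```python
# Illustrative mapping from evidence features to predefined feedback messages.
# Feature names, thresholds, and message texts are placeholders only.
from dataclasses import dataclass


@dataclass
class EvidenceFeatures:
    num_pieces_of_evidence: int  # evidence spans drawn from the source text
    specificity: float           # how specific the cited evidence is (0-1)
    concentration: float         # how concentrated evidence is in one part of the essay (0-1)


FEEDBACK_MESSAGES = {
    "add_evidence": "Re-read the article and add more evidence to support your argument.",
    "explain_evidence": "Explain how the evidence you chose supports your main idea.",
    "spread_evidence": "Use evidence from different parts of the article, not just one section.",
}


def select_feedback(features: EvidenceFeatures, max_messages: int = 2) -> list[str]:
    """Return the subset of predefined messages that best fits the feature profile."""
    selected = []
    if features.num_pieces_of_evidence < 2:
        selected.append(FEEDBACK_MESSAGES["add_evidence"])
    if features.specificity < 0.5:
        selected.append(FEEDBACK_MESSAGES["explain_evidence"])
    if features.concentration > 0.8:
        selected.append(FEEDBACK_MESSAGES["spread_evidence"])
    return selected[:max_messages]
```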
The first phase of eRevise
Students write their first drafts via the Qualtrics survey system.
The second phase of eRevise
Students revise their drafts in the eRevise system, guided by feedback messages selected by the automated system.
Software & Code
Leveraging ChatGPT to Predict Revision Quality
We study the relationship between Argument Contexts (ACs) and Argument Revisions (ARs) in argumentative writing. We use chain-of-thought prompting to guide ChatGPT in generating ACs that help identify successful versus unsuccessful ARs, and show that ChatGPT does help predict revision quality.
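A minimal sketch of this kind of chain-of-thought prompting appears below, assuming the OpenAI Python client. The prompt wording, model name, and judge_revision helper are illustrative assumptions, not the exact prompts or configuration used in the paper.

```python
# Illustrative chain-of-thought prompt for judging a single revision.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment;
# the prompt text and model name are placeholders.
from openai import OpenAI

client = OpenAI()


def judge_revision(context: str, old_sentence: str, new_sentence: str) -> str:
    prompt = (
        "You are evaluating a revision in an argumentative essay.\n"
        f"Argument context: {context}\n"
        f"Original sentence: {old_sentence}\n"
        f"Revised sentence: {new_sentence}\n"
        "Think step by step about whether the revision strengthens the evidence "
        "or reasoning in this context, then answer with exactly one word: "
        "successful or unsuccessful."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip().lower()
```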
Co-Attention Based Source-Dependent Essay Grading
We use a co-attention mechanism to help the model learn the importance of each part of the essay more accurately. The paper shows that the co-attention-based neural network model provides reliable score predictions for source-dependent responses.
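The PyTorch sketch below shows the general shape of a co-attention layer between essay and source sentence representations. The dimensions, pooling, and scoring head are simplified placeholders rather than the architecture reported in the paper.

```python
# Minimal PyTorch sketch of co-attention between essay and source representations.
# Dimensions, pooling, and the scoring head are illustrative simplifications.
import torch
import torch.nn as nn


class CoAttentionScorer(nn.Module):
    def __init__(self, hidden_dim: int = 128):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, hidden_dim)
        self.score_head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, essay: torch.Tensor, source: torch.Tensor) -> torch.Tensor:
        # essay:  (batch, n_essay_sents, hidden_dim)
        # source: (batch, n_source_sents, hidden_dim)
        # Affinity between every essay sentence and every source sentence.
        affinity = torch.bmm(self.proj(essay), source.transpose(1, 2))
        # Attend over the source article for each essay sentence.
        attn = torch.softmax(affinity, dim=-1)
        source_aware_essay = torch.bmm(attn, source)
        # Pool both views and predict a single essay score.
        pooled = torch.cat([essay.mean(dim=1), source_aware_essay.mean(dim=1)], dim=-1)
        return self.score_head(pooled).squeeze(-1)
```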