e-Revise: An automated writing evaluation system to support text-based argumentation and revision

Using Natural Language Processing for Scoring Writing and Providing Feedback At-Scale

Our project studies the use of Natural Language Processing (NLP) techniques to score students' argument writing and provide automated feedback that helps students strengthen their essays. We have recently expanded the project to focus on automated assessment of students' revision efforts, with feedback designed to develop their revision skills. Our work has been funded by the National Science Foundation, the Department of Education's Institute of Education Sciences, and the Learning Research & Development Center at the University of Pittsburgh. A summary of some of our research contributions is provided in a research brief published by the RAND Corporation; further detail is available in the Publications section below.

People

We are an interdisciplinary team of researchers from computer science and education, with expertise in natural language processing, literacy instruction, quantitative and qualitative methods, and policy.

Principal Investigators

Students

Student and Staff Alumni

Publications

Our publications span investigations of the technical quality of our scores, fairness evaluations, connections of our system to learning theory, and validity studies relating our scores to multiple measures of classroom instruction.

2023

2022

2021

2020

2019

2018

2017

2016

2015

2014

Screenshots of the eRevise and eRevise+RF Systems

The Architecture of eRevise+RF (New)

The eRevise+RF system builds on the prior eRevise system and focuses on analyzing revisions in text-based argument essays. After students submit their first and second drafts, eRevise+RF's AES component scores the quality of their revisions and provides formative feedback on their use of text evidence. Students can then improve their third draft based on the automated feedback messages.
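
To make the flow concrete, here is a minimal Python sketch of the draft-score-feedback loop described above. All function names, the 1-3 revision-quality scale, and the feedback messages are hypothetical placeholders, not the actual eRevise+RF implementation.

    # A minimal sketch of the eRevise+RF draft -> score -> feedback loop.
    # Function names, the 1-3 RF scale, and the messages are hypothetical.

    def score_revision_quality(draft1: str, draft2: str) -> int:
        # Stand-in for the AES model: reward added words as a crude proxy;
        # the real system scores evidence use, not length.
        added = len(draft2.split()) - len(draft1.split())
        return 3 if added > 50 else 2 if added > 10 else 1

    def select_feedback(rf_score: int) -> list[str]:
        # Map a revision-quality (RF) score to predefined formative
        # feedback messages about text-evidence use (illustrative text).
        messages = {
            1: ["Add more evidence from the article to support your claim."],
            2: ["Good additions; now explain how your new evidence supports your claim."],
            3: ["Strong revision! Check that every piece of evidence is tied to your argument."],
        }
        return messages[rf_score]

    def erevise_rf_round(draft1: str, draft2: str) -> list[str]:
        # Score the revision between consecutive drafts, then return the
        # feedback the student sees before writing a third draft.
        return select_feedback(score_revision_quality(draft1, draft2))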

The architecture of eRevise (Prior System)

After students submit their first drafts, eRevise's AES component extracts features representing the quality of text-evidence usage in terms of the constructs in the RTA Evidence rubric. Some of these features are then passed as input to the AWE system's feedback selection algorithm, which in turn outputs the subset of predefined feedback messages believed to best address the problems in the first draft.
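
The paragraph above describes a two-stage pipeline: feature extraction followed by feedback selection. The Python sketch below illustrates that idea; the feature names are loosely inspired by the RTA Evidence rubric, while the extraction heuristic, thresholds, and messages are illustrative stand-ins rather than eRevise's actual implementation.

    # A minimal sketch of eRevise's feature-extraction and feedback-selection
    # stages; features, thresholds, and messages are illustrative stand-ins.

    from dataclasses import dataclass

    @dataclass
    class EvidenceFeatures:
        n_evidence: int      # pieces of text evidence found in the draft
        specificity: float   # how specific the cited evidence is (0-1)
        elaboration: float   # how well the evidence is explained (0-1)

    def extract_features(draft: str, source_text: str) -> EvidenceFeatures:
        # Stand-in extractor: word overlap with the source article as a
        # crude proxy for evidence use (the real features are richer).
        overlap = set(draft.lower().split()) & set(source_text.lower().split())
        return EvidenceFeatures(
            n_evidence=len(overlap) // 10,
            specificity=min(1.0, len(overlap) / 40),
            elaboration=min(1.0, len(draft.split()) / 200),
        )

    def select_feedback(f: EvidenceFeatures) -> list[str]:
        # Choose the subset of predefined messages that best addresses
        # the weakest aspects of the first draft.
        messages = []
        if f.n_evidence < 2:
            messages.append("Use more evidence from the article.")
        if f.specificity < 0.5:
            messages.append("Provide more details about the evidence you chose.")
        if f.elaboration < 0.5:
            messages.append("Explain how your evidence connects to your argument.")
        return messages or ["Nice work! Reread your essay to check its clarity."]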

The first phase of eRevise

Students write their first drafts via the Qualtrics survey system.

The second phase of eRevise

Students revise their drafts in the eRevise system, guided by feedback messages selected by the automated system.

Software & Code

Leveraging ChatGPT to Predict Revision Quality

We study the relationship between Argument Contexts (ACs) and Argument Revisions (ARs) in argumentative writing, using chain-of-thought prompts to guide ChatGPT in generating ACs that distinguish successful from unsuccessful ARs. We show that ChatGPT does help predict revision quality.
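
As a rough illustration, the following Python sketch shows how a chain-of-thought prompt might be sent to ChatGPT through the OpenAI client to judge a single revision. The prompt wording, reasoning steps, and model choice are illustrative assumptions, not the exact prompts or settings from our paper.

    # A minimal sketch of judging one revision with a chain-of-thought
    # prompt via the OpenAI Python client (openai >= 1.0). The prompt
    # wording, steps, and model choice are illustrative assumptions.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    COT_PROMPT = """You will see a sentence from an essay's first draft and its
    revised version from the second draft.
    Step 1: Summarize the argument context (the claim the sentence supports).
    Step 2: Describe what changed between the two versions.
    Step 3: Conclude SUCCESSFUL or UNSUCCESSFUL: does the revision strengthen
    the argument? Explain why.

    Original: {original}
    Revised: {revised}"""

    def judge_revision(original: str, revised: str) -> str:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # any chat model; the paper's choice may differ
            messages=[{"role": "user",
                       "content": COT_PROMPT.format(original=original,
                                                    revised=revised)}],
            temperature=0,  # reduce sampling variance for evaluation
        )
        return response.choices[0].message.content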

Co-Attention Based Source-Dependent Essay Grading

We use a co-attention mechanism to help the model learn the importance of each part of the essay more accurately, and show that the co-attention-based neural network model provides reliable score predictions for source-dependent responses.
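
For readers curious about the mechanism, here is a minimal PyTorch sketch of a co-attention step between an essay and its source text. The dimensions, attention direction, pooling, and scoring head are illustrative choices; the model described in the paper is more elaborate.

    # A minimal PyTorch sketch of co-attention between an essay and its
    # source text; dimensions and the scoring head are illustrative.

    import torch
    import torch.nn.functional as F

    def co_attention(essay: torch.Tensor, source: torch.Tensor) -> torch.Tensor:
        # essay: (batch, n_essay, d); source: (batch, n_source, d).
        # Affinity between every essay position and every source position.
        affinity = torch.bmm(essay, source.transpose(1, 2))  # (b, n_e, n_s)
        # For each essay position, attend over the source positions.
        attn = F.softmax(affinity, dim=-1)
        source_ctx = torch.bmm(attn, source)                 # (b, n_e, d)
        # Concatenate each essay vector with its attended source context so
        # downstream layers can weigh essay parts by their source relevance.
        return torch.cat([essay, source_ctx], dim=-1)        # (b, n_e, 2d)

    # Usage: pool the co-attended sequence and regress to an essay score.
    essay = torch.randn(8, 300, 128)    # e.g., sentence embeddings of the essay
    source = torch.randn(8, 200, 128)   # embeddings of the source article
    fused = co_attention(essay, source)             # (8, 300, 256)
    score = torch.nn.Linear(256, 1)(fused.mean(1))  # (8, 1) predicted scores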

Sponsors