SERAC: Memory-Based Model Editing at Scale

paper | code | interactive demo

A computationally efficient approach to applying multiple edits to the knowledge or behaviors of pre-trained language models

Abstract

Even the largest neural networks make errors, and once-correct predictions can become invalid as the world changes. Model editors make local updates to the behavior of base (pre-trained) models to inject updated knowledge or correct undesirable behaviors. Existing model editors have shown promise, but also suffer from insufficient expressiveness: they struggle to accurately model an edit's intended scope (examples affected by the edit), leading to inaccurate predictions for test inputs loosely related to the edit, and they often fail altogether after many edits. As a higher-capacity alternative, we propose Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model (SERAC), which stores edits in an explicit memory and learns to reason over them to modulate the base model's predictions as needed. SERAC not only addresses both of the challenges above, but also produces an editor that can be reused across multiple base models. To enable more rigorous evaluation of model editors, we introduce three challenging language model editing problems based on question answering, fact-checking, and dialogue generation, finding that only SERAC achieves high performance on all three problems.
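The abstract describes a simple gating procedure: edits are stored in an explicit memory, a learned scope classifier decides whether a new input falls within the scope of any stored edit, and if so a counterfactual model predicts conditioned on the most relevant edit; otherwise the unmodified base model answers. The sketch below is a minimal, illustrative rendering of that inference flow, not the released implementation; the class and parameter names (SeracEditor, scope_score, counterfactual_model, threshold) are placeholders, and the toy callables at the bottom stand in for trained networks.

```python
# Minimal sketch of SERAC-style semi-parametric inference (illustrative only).
# In practice, the scope classifier and counterfactual model are trained networks;
# the base model's weights are never modified.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class SeracEditor:
    base_model: Callable[[str], str]                  # frozen pre-trained model
    scope_score: Callable[[str, str], float]          # relevance of an (input, edit) pair
    counterfactual_model: Callable[[str, str], str]   # predicts from input + retrieved edit
    threshold: float = 0.5                            # in-scope decision boundary
    memory: List[str] = field(default_factory=list)   # explicit cache of edit descriptors

    def apply_edit(self, edit: str) -> None:
        """Store a new edit in the explicit memory; no gradient updates are needed."""
        self.memory.append(edit)

    def predict(self, x: str) -> str:
        if self.memory:
            # Retrieve the cached edit most relevant to the input.
            best_edit, best_score = max(
                ((e, self.scope_score(x, e)) for e in self.memory),
                key=lambda pair: pair[1],
            )
            if best_score >= self.threshold:
                # In scope: the counterfactual model overrides the base model.
                return self.counterfactual_model(x, best_edit)
        # Out of scope (or no edits yet): defer to the unmodified base model.
        return self.base_model(x)


if __name__ == "__main__":
    # Toy stand-ins: edits are "topic -> answer" strings matched by substring.
    editor = SeracEditor(
        base_model=lambda x: "base model answer",
        scope_score=lambda x, e: 1.0 if e.split(" -> ")[0] in x else 0.0,
        counterfactual_model=lambda x, e: e.split(" -> ")[1],
    )
    editor.apply_edit("capital of France -> Paris")
    print(editor.predict("What is the capital of France?"))  # edited answer: "Paris"
    print(editor.predict("An unrelated question"))           # falls back to the base model
```

Because the base model is only consulted and never fine-tuned, a single trained editor of this form can be reused across multiple base models, as noted in the abstract.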

If this paper or code was helpful for your research, please use the following citation:

@inproceedings{mitchell2022memory,
    title={Memory-Based Model Editing at Scale},
    author={Mitchell, Eric and Lin, Charles and Bosselut, Antoine and Finn, Chelsea and Manning, Christopher D.},
    booktitle={International Conference on Machine Learning},
    url={https://arxiv.org/pdf/2206.06520.pdf},
    year={2022}
}