SustaiNLP 2022

Third Workshop on Simple and Efficient Natural Language Processing


Workshop Description

The Natural Language Processing (NLP) community has, in recent years, focused heavily on achieving higher scores on standard benchmarks and taking the lead on community-wide leaderboards (e.g., GLUE, SentEval). While this aspiration has led to improvements in benchmark performance of (predominantly neural) models, it has also come at a cost, namely increased model complexity and the ever-growing amount of computational resources required for training and using current state-of-the-art models. Moreover, recent research efforts have, for the most part, failed to identify the sources of empirical gains, often leaving model complexity unjustified by anything beyond benchmark performance.

In light of these easily observable trends, we proposed the SustaiNLP workshop to promote more sustainable NLP research and practices, with two main objectives: (1) encouraging the development of more efficient NLP models; and (2) providing simpler architectures and empirical justification of model complexity. For both objectives, we encourage submissions from all topical areas of NLP.


Concerning efficiency, we encourage submissions covering models that yield competitive performance but are more efficient than existing models in any of the following aspects:

  • Data and training efficiency: models requiring less training data, fewer computational resources, and/or less training time;

  • Inference efficiency: models with lower computational complexity of prediction/inference.

With respect to justifiability of model complexity, we encourage submissions that:

  • Justify the complexity of existing or newly proposed NLP models, e.g., by showing that meaningful simplifications of the model lead to significant deterioration in performance, interpretability, and/or robustness; that larger language models converge faster and are more compressible; or that increasing the amount of data while using a smaller model eventually ceases to help.

  • Introduce a conceptual or practical simplification of an existing model that (1) yields comparable performance while (2) offering advantages such as interpretability, faster inference, or greater robustness.


The workshop also encourages novel ways of evaluating and reporting research, beyond the currently prevalent comparisons (using established metrics) with state-of-the-art models on known benchmarks. Concretely, we aim to (1) promote best practices in reporting experimental results, (2) encourage work that critically analyzes existing evaluation protocols, and (3) encourage the development and use of novel evaluation procedures.

With the SustaiNLP workshop, we wish to complement existing related events on reproducibility and interpretability (e.g., 4REAL@LREC18, BlackboxNLP) and to further encourage the community to justify the complexity of models and to design simpler solutions that yield competitive results. Furthermore, our focus on efficiency and justifiability has the potential to stimulate conceptual creativity and novelty in model design, as opposed to current trends, in which empirical progress is predominantly achieved by increasing model complexity, computational resources, and training data.

Links to the first and second SustaiNLP workshops: https://sites.google.com/view/sustainlp2020 and https://sites.google.com/view/sustainlp2021