InterNLP 2021

First Workshop on Interactive Learning for Natural Language Processing

Workshop at ACL 2021

Virtual Conference, August 5th, 2021

Proceedings (ACL Anthology)

Contact: internlp2021@googlegroups.com

We thank all invited speakers, panelists, presenters, and attendees
for their active engagement in the workshop!

The recordings of the invited talks are available on YouTube. If access is restricted in your country, please contact us at internlp2021@googlegroups.com.

Motivation

A key aspect of human learning is the ability to learn continuously from various sources of feedback. In contrast, much of the recent success of deep learning for NLP relies on large datasets and extensive compute resources to train and fine-tune models, which then remain fixed. This leaves a research gap for systems that adapt to the changing needs of individual users or allow users to continually correct errors as they emerge. Learning from user interaction is crucial for tasks that require a high degree of personalization and for rapidly changing or complex, multi-step tasks where collecting and annotating large datasets is not feasible, but an informed user can provide guidance.

What is interactive NLP?

Interactive Learning for NLP means training, fine-tuning, or otherwise adapting an NLP model to inputs from a human user or teacher. Relevant approaches range from active learning with a human in the loop to training with implicit user feedback (e.g., clicks), dialogue systems that adapt to user utterances, and training with new forms of human input. Interactive learning stands in contrast to learning from datasets collected offline, with no human input during the training process.
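To make the first of these approaches concrete, the following is a minimal sketch of pool-based active learning with a human in the loop, using uncertainty sampling over a toy pool of feature vectors. It is an illustrative example only, not taken from any workshop material: the classifier choice, the ask_human helper, and the random data are all assumptions for the sake of the sketch.

    # Minimal sketch: pool-based active learning with a human in the loop.
    # All names (ask_human, unlabeled_pool, ...) are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def ask_human(example):
        # Placeholder for real user interaction (a label request, click, or instruction).
        return int(input(f"Label for {example!r} (0/1): "))

    # Toy pool of feature vectors; in practice these would be text representations.
    unlabeled_pool = [np.random.rand(5) for _ in range(100)]
    labeled_X, labeled_y = [np.random.rand(5), np.random.rand(5)], [0, 1]  # seed set

    model = LogisticRegression()
    for _ in range(5):  # interaction rounds
        model.fit(np.array(labeled_X), np.array(labeled_y))
        # Uncertainty sampling: query the example the model is least sure about.
        probs = model.predict_proba(np.array(unlabeled_pool))[:, 1]
        idx = int(np.argmin(np.abs(probs - 0.5)))
        x = unlabeled_pool.pop(idx)
        labeled_X.append(x)
        labeled_y.append(ask_human(x))

In a real interactive NLP system, the query strategy, the form of user feedback, and the update rule would all be richer; the loop above only illustrates the basic alternation between model updates and user input.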

Goals

The goal of this workshop is to bring together researchers to:

  • Develop novel methods for interactive machine learning of NLP models.

  • Discuss how to evaluate interactive NLP systems, including models for realistic user simulation.

  • Identify scenarios involving natural language where interactive learning is beneficial.

Previous work has been split across different tracks and task-focused workshops, making it hard to disentangle applications from broadly applicable methodologies or to establish common practices for evaluating interactive learning systems. We aim to bring together researchers to share insights on interactive learning from a wide range of NLP-related fields, including, but not limited to, dialogue systems, question answering, summarization, and educational applications.

Concerning methodology, we encourage submissions investigating various dimensions of interactive learning, such as (but not restricted to):

  • Interactive machine learning methods: the wide range of topics discussed above, from active learning with a user to methods that extract, interpret, and aggregate user feedback or preferences from complex interactions, such as natural language instructions.

  • User effort: the amount of user effort required for different types of feedback (e.g., explicit labels require more effort than feedback inferred from user interactions such as clicks or viewing time), and how users cope with the system misinterpreting their instructions.

  • Feedback types: different types of feedback require different techniques to incorporate them into a model; e.g., explicit labels can be used to train a model directly, while user instructions must first be interpreted.

A major bottleneck for interactive learning approaches is their evaluation, including a lack of suitable datasets. We therefore encourage submissions that cover research into the following:

  • Evaluation methods: ways to assess interactive methods, such as low-effort, easily reproducible studies with real-world users and simulated user models for automated evaluation.

  • Reproducibility: procedures for documenting user evaluations and ensuring they are reproducible.

  • Data: novel datasets for training and evaluating interactive models.

To investigate scenarios where interactive learning is effective, we invite submissions that present empirical results for applications of interactive methods.

Related Workshops

  • Workshop on Data Science with Human-in-the-loop: Language Advances (DaSH-LA) (co-located with NAACL 2021)