RepL4NLP 2022

Announcement May 29th: Thank you everyone for attending the workshop! If you have any feedback, please email us (repl4nlp@googlegroups.com)!

Announcement April 20th: The workshop schedule is available now!

Announcement Feb 28th: We have extended the submission deadline to March 5th, 2022.


The 7th Workshop on Representation Learning for NLP (RepL4NLP 2022) will be hosted by ACL 2022 and held on 26 May 2022. The workshop is being organised by Spandana Gella, He He, Burcu Can, Maximilian Mozes, Eleonora Giunchiglia, Sewon Min, Samuel Cahyawijaya, Xiang Lorraine Li and Bodhisattwa Prasad Majumder; and advised by Isabelle Augenstein, Anna Rogers, Kyunghyun Cho, Edward Grefenstette, Chris Dyer and Laura Rimell. The workshop is organised by the ACL Special Interest Group on Representation Learning (SIGREP).

The 7th Workshop on Representation Learning for NLP aims to continue the success of the RepL4NLP workshop series: the 1st Workshop on Representation Learning for NLP received about 50 submissions and over 250 attendees (the second most attended co-located event at ACL 2016, after WMT). The workshop was introduced as a synthesis of several years of independent *CL workshops focusing on vector space models of meaning, compositionality, and the application of deep neural networks and spectral methods to NLP. It provides a forum for discussing recent advances on these topics, as well as future research directions in linguistically motivated vector-based models in NLP. The workshop will take place in a hybrid setting and, as in previous years, will feature interdisciplinary keynotes, paper presentations, posters, and a panel discussion.


Key Dates

Deadline for paper submission: March 5th, 2022 (extended from Feb 28th, 2022; 11:59pm Anywhere on Earth)

Deadline for submission with ARR reviews: March 21st, 2022

Notification of acceptance: March 26th, 2022

Camera ready submission due: April 10th, 2022

Early registration deadline: TBA

Workshop: May 26th, 2022, Dublin, Ireland as well as virtual


Keynote Speakers

Been Kim, Google Brain

Emma Strubell, Carnegie Mellon University

Monojit Choudhury, Microsoft Turing, India

Percy Liang, Stanford University

Sebastian Riedel, University College London & Meta AI

Patrick Lewis, Meta AI

(Please see the Invited Speakers page for more information!)


Panel Discussion

Special theme: Traditional (vector) representations & language model representations

Panel members:

  • Been Kim, Google Brain

  • Emma Strubell, Carnegie Mellon University

  • Monojit Choudhury, Microsoft Turing, India

  • Sebastian Riedel, University College London & Meta AI

  • Omer Levy, Tel Aviv University

  • Eunsol Choi, University of Texas at Austin

  • Mohit Iyyer, University of Massachusetts Amherst


Moderator:


Topics

  • Developing new representations: at the document, sentence, word, or sub-word level, using language model objectives, word embeddings, spectral methods, etc.

  • Evaluating existing representations: probing representations for generalization, compositionality & robustness, adversarial evaluation, analysis of representations.

  • Efficient learning of representations and inference: with respect to training and inference time, model size, amount of training data, etc.

  • Beyond English / text representations: multi-modal, cross-lingual, knowledge-informed embeddings, structure-informed embeddings (syntax, morphology), etc.



Sponsors