RepL4NLP 2023
Announcement 26 June: The workshop schedule is available now!
The 8th Workshop on Representation Learning for NLP (RepL4NLP 2023) will be hosted by ACL 2023 and held on 13 July 2023. The workshop is organised by Burcu Can, Maximilian Mozes, Samuel Cahyawijaya, Naomi Saphra, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, and Chen Zhao, and advised by Isabelle Augenstein, Anna Rogers, Kyunghyun Cho, and Edward Grefenstette. It is held under the auspices of the ACL Special Interest Group on Representation Learning (SIGREP).
The 8th Workshop on Representation Learning for NLP aims to continue the success of the RepL4NLP workshop series: the 1st Workshop on Representation Learning for NLP received about 50 submissions and drew over 250 attendees, making it the second most attended co-located event at ACL'16 after WMT. The workshop series was introduced as a synthesis of several years of independent *CL workshops focusing on vector space models of meaning, compositionality, and the application of deep neural networks and spectral methods to NLP. It provides a forum for discussing recent advances in these areas, as well as future research directions in linguistically motivated vector-based models in NLP. The workshop will take place in a hybrid setting and, as in previous years, will feature interdisciplinary keynotes, paper presentations, posters, and a panel discussion.
Key Dates
Direct paper submission deadline: April 24, 2023 (extended to April 28, 2023)
ACL 2023 fast-track deadline: May 15, 2023
ARR commitment deadline: May 15, 2023
Notification of acceptance: May 22, 2023 (delayed to May 23, 2023)
Camera-ready paper due: May 30, 2023
Pre-recorded video due: June 12, 2023
Workshop date: July 13, 2023
⚠️ ACL 2023 fast-track and findings notifications will be sent out on May 30, 2023. The camera-ready deadline for these papers is June 6, 2023. ⚠️
Keynote Speakers
Swabha Swayamdipta (University of Southern California)
Samira Abnar (Apple)
Hannaneh Hajishirzi (University of Washington & Allen AI)
Omer Levy (Tel Aviv University & Meta AI)
Topics
Developing new representations: at the document, sentence, word, or sub-word level, using language model objectives, word embeddings, spectral methods, etc.
Evaluating existing representations: probing representations for generalization, compositionality & robustness, adversarial evaluation, analysis of representations.
Efficient representation learning and inference: efficiency with respect to training and inference time, model size, amount of training data, etc.
Beyond English / text representations: multi-modal, cross-lingual, knowledge-informed embeddings, structure-informed embeddings (syntax, morphology), etc.
The relation between representations and model behavior: how representations give rise to a model's predictions, and how interventions in the representation space causally affect the model's behavior.