RepL4NLP 2025
The 10th Workshop on Representation Learning for NLP (RepL4NLP 2025) will be hosted by NAACL 2025 and held on 3 or 4 May 2025. The workshop is being organised by Vaibhav Adlakha, Alexandra Chronopoulou, Xiang Lorraine Li, Bodhisattwa Prasad Majumder, Freda Shi, and Giorgos Vernikos, and advised by Isabelle Augenstein, Anna Rogers, Kyunghyun Cho, Edward Grefenstette, Elena Voita, and Nora Kassner. The workshop is organised by the ACL Special Interest Group on Representation Learning (SIGREP).
The 10th Workshop on Representation Learning for NLP aims to continue the success of the RepL4NLP workshop series. The workshop was introduced as a synthesis of several years of independent *CL workshops focusing on vector space models of meaning, compositionality, and the application of deep neural networks and spectral methods to NLP. It provides a forum for discussing recent advances in these topics, as well as future research directions in linguistically motivated vector-based models in NLP. The workshop will take place in a hybrid setting and, as in previous years, feature interdisciplinary keynotes, paper presentations, posters, and a panel discussion.
Key Dates
Direct paper submission deadline: January 30, 2025
ARR commitment deadline: February 20, 2025
Notification of acceptance: March 1, 2025
Camera-ready paper due: March 10, 2025
Pre-recorded video due: April 8, 2025
Workshop date: May 3 or 4, 2025
Keynote Speakers
To be announced soon!
Topics
Efficient learning of representations and inference as models scale up: efficiency with respect to the amount of training and fine-tuning data, training and inference compute time, and model energy consumption for development and deployment.
Investigating representation dynamics during training: understanding how representations evolve throughout the training process.
Evaluating existing representations: probing representations for generalization, compositionality, robustness, etc.
Understanding the relationship between representations and model behavior: how learned representations drive predictions, how interventions in the representation space causally affect model behavior, what types of representations lead to better simulation of human behavior, etc.
Beyond English textual representation: including but not limited to cross-modal, cross-lingual, knowledge-informed, linguistically-informed, and cognitively plausible representations, and how data from different sources interact in the training and inference processes.
Developing new representations: at various levels, using language model objectives, spectral methods, neuro-symbolic methods, etc.