Co-located with EMNLP 2022

EvoNLP

The First Workshop on Ever Evolving NLP

EvoNLP, the First Workshop on Ever Evolving NLP, is a forum to discuss the challenges posed by the dynamic nature of language in the context of the current NLP paradigm, which is dominated by language models. In addition to regular research papers, EvoNLP will feature invited speakers from both industry and academia offering insights on the challenges involved in two main areas: data and models. The workshop will also feature a shared task on time-aware Word-in-Context classification (TempoWiC).
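As a rough illustration of the shared task setup, the sketch below frames time-aware Word-in-Context classification as comparing the contextual embedding of a target word across two time-stamped messages. This is a minimal sketch, not the official task baseline: the model name, the subtoken-pooling heuristic, and the similarity threshold are all illustrative assumptions.

    # Minimal sketch: decide whether a target word keeps the same meaning in
    # two time-stamped messages by comparing contextual embeddings.
    # NOTE: model name, pooling heuristic and threshold are illustrative,
    # not the official TempoWiC baseline.
    import torch
    from transformers import AutoModel, AutoTokenizer

    MODEL = "cardiffnlp/twitter-roberta-base"  # any masked LM would do here
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModel.from_pretrained(MODEL)

    def target_embedding(text: str, target: str) -> torch.Tensor:
        """Mean-pool the contextual vectors of the target word's subtokens."""
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
        # Locate the target's subtoken span (leading space for BPE tokenizers).
        sub = tokenizer(" " + target, add_special_tokens=False)["input_ids"]
        ids = enc["input_ids"][0].tolist()
        for i in range(len(ids) - len(sub) + 1):
            if ids[i:i + len(sub)] == sub:
                return hidden[i:i + len(sub)].mean(dim=0)
        return hidden.mean(dim=0)  # fallback: sentence-level embedding

    def same_meaning(msg_a: str, msg_b: str, target: str, thr: float = 0.7) -> bool:
        ea = target_embedding(msg_a, target)
        eb = target_embedding(msg_b, target)
        return torch.cosine_similarity(ea, eb, dim=0).item() >= thr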

Important Dates

(Anywhere on Earth)

  • Submission deadline (for papers requiring review / non-archival): 12 October, 2022 [Extended!]

  • Submission deadline (with ARR reviews): 25 October, 2022

  • Notification of acceptance: 31 October, 2022

  • Camera-ready paper deadline: 11 November, 2022

  • Workshop date: 7 December, 2022


Program

Times in United Arab Emirates (GMT+4) (conversion table)
Location: Capital Suite 3
Virtual: Underline, Zoom

09:00 - 09:30 Opening remarks (slides)

09:30 - 10:00 Jacob Eisenstein - What can we learn from language change?

10:00 - 10:30 Eunsol Choi - Knowledge-rich NLP models in a dynamic real world

10:30 - 11:00 Adam Jatowt - Automatic Question Answering over Temporal News Collections

11:00 - 12:30 Workshop poster session (virtual and on-site)

  • Temporal Word Meaning Disambiguation using TimeLMs
    M. Godbole, P. Dandavate, A. Kane

  • HSE at TempoWiC: Detecting Meaning Shift in Social Media with Diachronic Language Models
    E. Tukhtina, S. Vydrina, K. Kashleva

  • MLLabs-LIG at TempoWiC 2022: A Generative Approach for Examining Temporal Meaning Shift
    C. Lyu, Y. Zhou, T. Ji

  • Using Deep Mixture-of-Experts to Detect Word Meaning Shift for TempoWiC
    Z. Chen, K. Wang, Z. Cai, J. Zheng, J. He, M. Gao, J. Zhang

  • Knowledge Unlearning for Mitigating Privacy Risks in Language Models
    J. Jang, D. Yoon, S. Yang, S. Cha, M. Lee, L. Logeswaran, M. Seo

  • TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models
    J. Jang, S. Ye, C. Lee, S. Yang, J. Shin, J. Han, G. Kim, M. Seo

12:30 - 14:00 Lunch break

14:00 - 15:00 Findings and non-archival session (6-minute presentations)

Findings:

  • Semi-Supervised Lifelong Language Learning
    Y. Zhao, Y. Zheng, B. Yu, Z. Tian, D. Lee, J. Sun, H. Yu, Y. Li, N. L. Zhang

  • The challenges of temporal alignment on Twitter during crises
    A. Pramanick, T. Beck, K. Stowe, I. Gurevych

  • LPC: A Logits and Parameter Calibration Framework for Continual Learning
    X. Li, Z. Wang, D. Li, L. Khan, B. Thuraisingham

  • On the Impact of Temporal Concept Drift on Model Explanations
    Z. Zhao, G. Chrysostomou, K. Bontcheva, N. Aletras

Non-Archival:

  • Knowledge Unlearning for Mitigating Privacy Risks in Language Models
    J. Jang, D. Yoon, S. Yang, S. Cha, M. Lee, L. Logeswaran, M. Seo

  • TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models
    J. Jang, S. Ye, C. Lee, S. Yang, J. Shin, J. Han, G. Kim, M. Seo

  • ReAct: Synergizing Reasoning and Acting in Language Models
    S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, Y. Cao

15:00 - 15:30 Coffee break

15:30 - 16:00 Nazneen Rajani - Takeaways from a systematic study of 75K models on Hugging Face

16:00 - 16:30 Ozan Sener - Going from Continual Learning Algorithms to Continual Learning Systems

16:30 - 17:00 Workshop oral session (6-minute presentations)

  • Leveraging time-dependent lexical features for offensive language detection
    B. McGillivray, M. Alahapperuma, J. Cook, C. D. Bonaventura, A. Meroño-Peñuela, G. Tyson, S. R. Wilson

  • CC-Top: Constrained Clustering for Dynamic Topic Discovery
    J. Goschenhofer, P. Ragupathy, C. Heumann, B. Bischl, M. Aßenmacher

  • Class Incremental Learning for Intent Classification with Limited or No Old Data
    D.Paul, D. Sorokin, J. Gaspers

17:00 - 17:30 Shared task session (6-minute presentations)

  • Using Two Losses and Two Datasets Simultaneously to Improve TempoWiC Accuracy
    M. J. Pirhadi, M. Mirzaei, S. Eetemadi

  • Using Deep Mixture-of-Experts to Detect Word Meaning Shift for TempoWiC
    Z. Chen, K. Wang, Z. Cai, J. Zheng, J. He, M. Gao, J. Zhang

17:30 - 18:00 Best paper awards and closing

🏆 Workshop best paper
CC-Top: Constrained Clustering for Dynamic Topic Discovery
J. Goschenhofer, P. Ragupathy, C. Heumann, B. Bischl, M. Aßenmacher

🏆 Shared task best paper
Using Deep Mixture-of-Experts to Detect Word Meaning Shift for TempoWiC
Z. Chen, K. Wang, Z. Cai, J. Zheng, J. He, M. Gao, J. Zhang

Workshop Topics

Dynamic Benchmarks: Evaluation of Model Degradation in Time


How do NLP models age, and how can this be measured?


  • How is temporal evaluation done in NLP? What is the effect of using random splits versus training on the past and evaluating on the future? (A sketch contrasting the two protocols follows this list.)

  • What tasks are most affected by time?

  • How often should we change models? Is the effect of time short-lived (days) or long-lived (years)?

  • Can we predict whether a model will degrade over time in a given domain? Can we measure language change in that domain?

  • Do all models degrade equally, or are some architectures more resilient?
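To make the random-versus-temporal-split question concrete, here is a minimal sketch contrasting the two evaluation protocols. It assumes a hypothetical list of timestamped records; any drop from the random-split score to the temporal-split score gives a crude estimate of temporal degradation for a given task and model.

    # Sketch: random split vs. past/future split over the same timestamped
    # dataset. "records" is a hypothetical list of dicts with a "timestamp"
    # key; any classifier can then be trained/evaluated on the two splits.
    import random

    def random_split(records, test_frac=0.2, seed=0):
        shuffled = records[:]
        random.Random(seed).shuffle(shuffled)
        cut = int(len(shuffled) * (1 - test_frac))
        return shuffled[:cut], shuffled[cut:]   # train, test

    def temporal_split(records, test_frac=0.2):
        # Train on the past, evaluate on the future.
        ordered = sorted(records, key=lambda r: r["timestamp"])
        cut = int(len(ordered) * (1 - test_frac))
        return ordered[:cut], ordered[cut:]     # past, future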


Time-Aware Models


If we can measure the effect of time on model performance, how can we design a model that takes time into account? And how can we update or replace models once they become inaccurate?


  • How can we design time-aware models that degrade less over time?

  • If we plan to update a model, what is the best approach: updating the existing model or replacing it completely? What are the consequences of each option? (A sketch of the update option follows this list.)

  • Once a model is updated (or replaced), how can the new model remain compatible with previous models used for the same task?
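As a concrete sketch of the "update an existing model" option above, one common approach is to continue masked-language-model pretraining on newer text before re-evaluating. The data file and hyperparameters below are hypothetical placeholders, not a recommendation from the workshop.

    # Sketch: update an existing model by continued MLM pretraining on newer
    # text. The data file "tweets_2022.txt" and all hyperparameters are
    # hypothetical placeholders.
    from datasets import load_dataset
    from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    MODEL = "roberta-base"
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForMaskedLM.from_pretrained(MODEL)

    new_text = load_dataset("text", data_files={"train": "tweets_2022.txt"})
    tokenized = new_text.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
        batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="updated-model",
                               per_device_train_batch_size=16,
                               num_train_epochs=1),
        data_collator=DataCollatorForLanguageModeling(tokenizer,
                                                      mlm_probability=0.15),
        train_dataset=tokenized["train"],
    )
    trainer.train()  # the updated checkpoint can then be re-evaluated over time

The alternative, full replacement, trades this incremental cost for a clean break, at the price of the compatibility questions raised in the last item above.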


Submissions

See Call For Papers

Invited Speakers

Jacob Eisenstein

Google Research

Eunsol Choi

University of Texas at Austin

Adam Jatowt

University of Innsbruck, Austria

Nazneen Rajani

Hugging Face

Ozan Sener

Intel Labs

Organizers

Francesco Barbieri

Snap Research

Jose Camacho-Collados

Cardiff University

Bhuwan Dhingra

Duke University

Luis Espinosa-Anke

Cardiff University

Elena Gribovskaya

DeepMind

Angeliki Lazaridou

DeepMind

Daniel Loureiro

Cardiff University

Leonardo Neves

Snap Research