Program

Schedule - July 14th, 2022

Venue: Columbia D (except Poster Session II)

The link to the Zoom room can be found on the Underline page above.


  • 08:50–09:00 - Opening remarks

  • 09:00–09:45 - Invited talk: Sebastian Ruder (Virtual)

  • 09:45–10:10 - Spotlight Paper Presentations

    • 09:45–09:57 - Oral: Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese

    • 09:58–10:10 - Oral: Clean or Annotate: How to Spend a Limited Data Collection Budget

  • 10:10–10:30 - ☕ Coffee break ☕


  • 10:30-11:15 - Invited talk: David Ifeoluwa Adelani

  • 11:15-12:15 - Poster session I (Virtual)

  • 12:15–13:30 - 🍱 Lunch break 🍱


  • 13:30–14:15 - Invited talk: Yulia Tsvetkov

  • 14:15–15:15 - Poster session II @ Regency Ballroom on the 7th floor and Poster session III (Virtual)

  • 15:15–15:30 - ☕ Coffee break ☕


  • 15:30–16:15 - Invited talk: Graham Neubig

  • 16:15–16:30 - Closing remarks


Events are in person unless indicated as virtual. All in-person and virtual events will be live-streamed through Underline.

Poster session I

Virtual - 11:15–12:15


  • Unsupervised Knowledge Graph Generation Using Semantic Similarity Matching

  • Few-shot Learning for Sumerian Named Entity Recognition

  • AfriTeVA: Extending “Small Data” Pretraining Approaches to Sequence-to-Sequence Models

  • FarFetched: Entity-centric Reasoning and Claim Validation for the Greek Language based on Textually Represented Environments

  • Alternative non-BERT model choices for the textual classification in low-resource languages and environments

  • Deep Learning-Based Morphological Segmentation for Indigenous Languages: A Study Case on Innu-Aimun

  • Improving Distantly Supervised Document-Level Relation Extraction Through Natural Language Inference

  • Cross-TOP: Zero-Shot Cross-Schema Task-Oriented Parsing

  • Unified NMT models for the Indian subcontinent, transcending script-barriers

  • Punctuation Restoration in Spanish Customer Support Transcripts using Transfer Learning

  • Help from the Neighbors: Estonian Dialect Normalization Using a Finnish Dialect Generator

  • Generating Complement Data for Aspect Term Extraction with GPT-2

  • Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese

  • Exploring diversity in back translation for low-resource machine translation

  • How to Translate Your Samples and Choose Your Shots? Analyzing Translate-train & Few-shot Cross-lingual Transfer

Poster session II

In-person - 14:15–15:15 @ Regency Ballroom on the 7th floor


  • Let the Model Decide its Curriculum for Multitask Learning

  • ANTS: A Framework for Retrieval of Text Segments in Unstructured Documents

  • Clean or Annotate: How to Spend a Limited Data Collection Budget

  • IDANI: Inference-time Domain Adaptation via Neuron-level Interventions

  • QuBERT: A Large Monolingual Corpus and BERT Model for Southern Quechua

  • Task Transfer and Domain Adaptation for Zero-Shot Question Answering

  • Generating unlabelled data for a tri-training approach in a low resourced NER task

  • Exploring diversity in back translation for low-resource machine translation

Note: All papers in the in-person session (Poster Session II) will have a virtual poster slot (optional) in the morning virtual session (Poster Session I) in case the authors want to present both virtually and in-person.

Poster session III

Virtual - 14:15–15:15


  • Event Extractor with Only A Few Examples

  • Clean or Annotate: How to Spend a Limited Data Collection Budget

  • IDANI: Inference-time Domain Adaptation via Neuron-level Interventions

  • QuBERT: A Large Monolingual Corpus and BERT Model for Southern Quechua

  • Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese

We will keep a virtual session running in parallel with Poster Session II. It is intended for presenters allocated to Poster Session I who could not attend their morning slot (or who want to present their work again). This session may also be used (optionally) by the in-person presenters of Poster Session II.