The 3rd Workshop on Neural Generation and Translation (WNGT 2019)

News: There were 179 attendees!

Proceedings:

Results of the shared task: Findings-WNGT19

News (22/10/2019): Please follow these instructions for printing posters:

https://www.emnlp-ijcnlp2019.org/participants/presentation/#poster-size

News (8/2/2019): We will be able to provide a limited number of student travel grants for those who would like to participate but need help with travel. Please contact the organizers after submitting your paper if you would be interested in receiving one!

WNGT will be held at EMNLP-IJCNLP 2019 (https://www.emnlp-ijcnlp2019.org/). The workshop will take place on November 4, 2019 at the conference venue, the AsiaWorld-Expo, in Hong Kong.

Neural sequence-to-sequence models are now a workhorse behind a wide variety of natural language processing tasks such as machine translation, generation, summarization, and simplification. This workshop aims to provide a forum for research on applications of neural models to language generation and translation tasks, including machine translation, summarization, NLG from structured data, and dialog response generation, among others.

This is the third workshop in the series, preceded by the Second Workshop on Neural Machine Translation and Generation (WNMT 2018), which was held at ACL 2018 and attracted more than 120 participants, with 16 papers accepted from 25 submissions. Notably, the accepted papers included not only algorithmic advances similar to those presented at the main conference, but also a number of high-quality analyses of the current state of affairs in neural MT and generation, which were of great interest to the focused research community that the workshop attracted. This year, we aim to complement the main conference with which WNGT is co-located by pursuing the following goals:


  1. Synthesizing the current state of knowledge in neural machine translation and generation: This year we will continue to encourage submissions that not only advance the state of the art through algorithmic advances, but also analyze and understand the current state of the art, pointing to future research directions. Building on last year's success, we may also hold a panel session attempting to answer major questions -- both specific (what methods do we use to optimize our systems? how do we perform search?) and general (what will be the long-lasting ideas? which problems have been essentially solved?) -- as well as to highlight potential areas of future research.
  2. Expanding the research horizons in neural generation and translation: Continuing from last year, we are organizing shared tasks. The first shared task is on “Efficient NMT”, focusing on developing MT systems that achieve not only high translation accuracy but also memory efficiency and/or translation speed, which are paramount concerns in practical deployment settings. Last year this task attracted 4 teams, and we expect at least as many this year. The second task is on “Document Level Generation and Translation”, for which we have prepared a dataset for generating textual documents from either structured data or documents in another language. We intend this both to push forward document-level generation technology and to provide a way to compare and contrast methods for generating from different types of input.


We also have excellent invited talks from leading researchers in the field:

  • Michael Auli
  • Mohit Bansal
  • Nanyun Peng
  • Jason Weston

Last year's invited speakers were Jacob Devlin, Andre Martins, Rico Sennrich, and Yulia Tsvetkov.

This year we are also accepting submissions of both completed and forward-looking work, which will be presented either as oral presentations or in a poster session.