The 3rd Workshop on Neural Generation and Translation (WNGT 2019)
Neural sequence to sequence models are now a workhorse behind a wide variety of different natural language processing tasks such as machine translation, generation, summarization and simplification. This workshop aims to provide a forum for research in applications of neural models to language generation and translation tasks (including machine translation, summarization, NLG from structured data, dialog response generation, among others).
This is the third workshop in the series, preceded by the Second Workshop on Neural Machine Translation and Generation (WNMT 2018), which was held at ACL 2018 and attracted more than 120 participants, with 16 accepted papers from 25 submissions. Notably, the accepted papers covered not only algorithmic advances similar to those presented at the main conference, but also a number of high-quality papers analyzing the current state of affairs in neural MT and generation, which were of great interest to the focused research community that the workshop attracted. This year, we aim to complement the main conference with which WNGT is co-located by pursuing the following goals:
- Synthesizing the current state of knowledge in neural machine translation and generation: This year we will continue to encourage submissions that not only advance the state of the art through algorithmic advances, but also analyze and understand the current state of the art, pointing to future research directions. Building on last year's success, we may also hold a panel session attempting to answer major questions -- both specific (what methods do we use to optimize our systems? how do we perform search?) and general (what will be the long-lasting ideas? which problems have been essentially solved?) -- as well as highlight potential areas of future research.
- Expanding the research horizons in neural generation and translation: Continuing from last year, we are organizing shared tasks. The first shared task is on "Efficient NMT", focusing on developing MT systems that achieve not only high translation accuracy but also memory efficiency and translation speed, which are paramount concerns in practical deployment settings. Last year the task attracted 4 teams, and we expect an equal or greater number this year. The second task is on "Document Level Generation and Translation", for which we will prepare datasets for generating textual documents from either structured data or documents in another language. We intend this to be both a task that pushes forward document-level generation technology and a way to compare and contrast methods for generating from different types of inputs.
We will also feature invited talks from leading researchers in the field (last year: Jacob Devlin, Andre Martins, Rico Sennrich, and Yulia Tsvetkov; confirmed for this year: Mohit Bansal, Mirella Lapata, and He He), and we will accept submissions of both completed and forward-looking work, to be presented either as oral presentations or during a poster session.