The 4th Workshop on Neural Generation and Translation (WNGT 2020)

See Programme Page for links to live talks, live poster chats and links to recorded talks

The Virtual Workshop page is nearly ready for Friday 10th:

Published Programme:

NEWS 11 June: Prerecorded paper talks can now be up to 10 minutes long

NEWS: We have a Programme section

NEWS: We have delayed the submission deadline due to the Coronavirus epidemic:

  • Submission deadline: 13 April 2020 (extended from 6 April 2020)
  • Notification of acceptance: 11 May 2020 (extended from 4 May 2020)

NEWS: Our Softconf paper submission link is:

NEWS: We have a new group mailing list for any queries about the tasks or paper submissions. Please join the group or view the archive here: !forum/wngt-info

NEWS: WNGT will be held on the 10th July

NEWS: Our Efficiency and DGT tasks have now been launched!

NEWS: WNGT 2020 will be held at ACL in Seattle on 9 or 10 July 2020.

Neural sequence to sequence models are now a workhorse behind a wide variety of different natural language processing tasks such as machine translation, generation, summarization and simplification. This workshop aims to provide a forum for research in applications of neural models to language generation and translation tasks (including machine translation, summarization, NLG from structured data, dialog response generation, among others).

This is the fourth workshop in the series, preceded by the Third Workshop on Neural Generation and Translation (WNGT 2019), which was held at EMNLP 2019 with 36 accepted papers from 68 submissions. The second workshop, at ACL 2018 in Melbourne, attracted more than 120 participants with 16 accepted papers from 25 submissions. Notably, the accepted papers covered not only algorithmic advances similar to those presented at the main conference, but also a number of high-quality papers analyzing the current state of affairs in neural MT and generation, which were of great interest to the focused research community that the workshop attracted. This year, we aim to complement the main conference with which WNGT is co-located by pursuing the following goals:

  1. Synthesize the current state of knowledge in neural machine translation and generation: This year we will continue to encourage submissions that not only advance the state of the art through algorithmic advances, but also analyze and understand the current state of the art, pointing to future research directions. Based on the success last year, we may also hold a panel session attempting to answer major questions -- both specific (what methods do we use to optimize our systems? how do we perform search?) and general (what will be the long-lasting ideas? which problems have been essentially solved?) -- as well as highlight potential areas of future research.
  2. Expand the research horizons in neural generation and translation: Continuing from last year, we are organizing shared tasks. Specifically, the first shared task is on “Efficient NMT”, focusing on developing MT systems that achieve not only high translation accuracy but also memory efficiency and translation speed, which are paramount concerns in practical deployment settings. Last year the task attracted 3 teams, and we expect at least as many this year. The second task is on “Document-Level Generation and Translation”, where the challenge is generating textual documents either from structured data or from documents in another language; last year the task attracted 4 teams. We intend this to be both a task that pushes forward document-level generation technology and a way to compare and contrast methods for generating from different types of inputs.


  • First call for papers: 11 December 2019
  • Second call for papers: 20 January 2020
  • Submission deadline: 13 April 2020 (extended from 6 April 2020)
  • Notification of acceptance: 11 May 2020 (+4 weeks after submission; extended from 4 May 2020)
  • Camera-ready papers due: 18 May 2020 (+1 week after notification)


The following people have agreed to speak at our workshop:

  • He He
  • Jiatao Gu
  • Shashi Narayan
  • Claire Gardent


The workshop is broad in scope and invites original research contributions on all topics involving neural networks in machine translation and generation. Topics of interest include, but are not limited to, the following:

  • Neural models for machine translation, generation, summarization, simplification
  • Analysis of the problems and opportunities of neural models for all of these tasks
  • Methods for incorporating linguistic insights: syntax, alignment, reordering, etc.
  • Handling resource-limited domains
  • Utilizing more data: monolingual, multilingual resources
  • Multi-task learning
  • Neural translation and generation models for mobile devices
  • Visualization of sequence-to-sequence models
  • Beyond sentence-level processing
  • Beyond maximum-likelihood estimation

If you want to contact the organizers, you can email: