Overview

Welcome to the home page of The 2nd Workshop on Neural Machine Translation and Generation (WNMT 2018), to be held in Melbourne, Australia on July 20, 2018, immediately after ACL 2018!

Neural sequence-to-sequence models are now a workhorse behind a wide variety of natural language processing tasks such as machine translation, generation, summarization, and simplification. This workshop aims to provide a forum for research on applications of neural models to machine translation and other language generation tasks (including summarization, NLG from structured data, and dialog response generation, among others).

The First Workshop on Neural Machine Translation (WNMT 2017) was held at ACL 2017 and attracted more than 160 participants, with 15 accepted papers from 24 submissions. Notably, the accepted papers covered not only algorithmic advances similar to those presented at the main conference, but also a number of high-quality papers analyzing the current state of affairs in neural MT, which were of great interest to the focused research community that the workshop attracted. This year, we aim to complement the main conference with which WNMT is co-located by pursuing the following goals:

  1. Synthesize the current state of knowledge in neural machine translation and generation: This year we will continue to encourage submissions that not only advance the state of the art through algorithmic advances, but also analyze and understand the current state of the art, pointing to future research directions. Building on last year's success, we may also hold a panel session attempting to answer major questions, both specific (what methods do we use to optimize our systems? how do we perform search?) and general (what will be the long-lasting ideas? which problems have been essentially solved?), and to highlight potential areas of future research.
  2. Expand the research horizons in NMT: Based on panel discussions from last year’s NMT workshop, we are organizing a shared task on “Efficient NMT”. The aim of this task is to focus not only on accuracy but also on memory efficiency and translation speed, which are paramount concerns in practical deployment settings. The workshop will provide a set of baselines for the task and reward systems that push forward the Pareto frontier of both efficiency and accuracy (a toy illustration of such a frontier appears after this list). We estimate that this task will attract about 10 teams.
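
As a toy, unofficial illustration of the Pareto-frontier idea above, the Python sketch below picks out the non-dominated systems from a set of hypothetical (BLEU, speed) measurements. All system names and numbers are invented for illustration and are not the shared task's actual baselines, metrics, or evaluation procedure.

```python
# Toy sketch: find systems on the accuracy/speed Pareto frontier.
# All system names, BLEU scores, and speeds are hypothetical.

systems = {
    "baseline-large": (28.0, 100.0),  # (BLEU, sentences/second)
    "baseline-small": (25.5, 600.0),
    "distilled":      (27.0, 500.0),
    "quantized":      (26.0, 300.0),  # dominated by "distilled"
}

def pareto_frontier(points):
    """Return the names of systems not dominated by any other system.

    A system is dominated if some other system is at least as good on
    both axes and strictly better on at least one.
    """
    frontier = []
    for name, (bleu, speed) in points.items():
        dominated = any(
            (b >= bleu and s >= speed) and (b > bleu or s > speed)
            for other, (b, s) in points.items() if other != name
        )
        if not dominated:
            frontier.append(name)
    return frontier

print(pareto_frontier(systems))
# ['baseline-large', 'baseline-small', 'distilled']
```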

We will also feature invited talks from leading researchers in the field (last year: Chris Dyer, Kevin Knight, Alexander Rush, and Quoc Le; this year's confirmed speakers are listed below), and we will accept submissions of both completed and forward-looking work, to be presented either as oral presentations or in a poster session.

Topics of Interest

The workshop is broad in scope and invites original research contributions on all topics in which neural networks are applied to machine translation and generation. Topics of interest include, but are not limited to, the following:

  • Neural models for machine translation, generation, summarization, simplification
  • Analysis of the problems and opportunities of neural models for all of these tasks
  • Methods for incorporating linguistic insights: syntax, alignment, reordering, etc.
  • Handling resource-limited domains
  • Utilizing more data: monolingual, multilingual resources
  • Multi-task learning
  • Neural translation and generation models for mobile devices
  • Visualization of sequence-to-sequence models
  • Beyond sentence-level processing
  • Beyond maximum-likelihood estimation

Invited Speakers

The following invited speakers have accepted our invitations; additional speakers are pending confirmation.

  1. Jacob Devlin (Google)
  2. Jason Weston (Facebook)
  3. Rico Sennrich (Edinburgh)
  4. Yulia Tsvetkov (CMU)

Venue

Co-located with ACL 2018 in Melbourne, Australia.

Workshop Logistics

  • Length: one day
  • Estimated attendees: 150