Schedule
Date: Friday, August 4, 2017
Session 1
09:30--09:40 Welcome and Opening Remarks
09:40--10:30 Keynote 1 - Chris Dyer
10:30--11:00 Coffee Break
Session 2
11:00--11:50 Keynote 2 - Alexander Rush
11:50--12:20 Best Paper Session
Outstanding Paper: An Empirical Study of Adequate Vision Span for Attention-Based Neural Machine Translation. Raphael Shu and Hideki Nakayama
Best Paper: Stronger Baselines for Trustable Results in Neural Machine Translation. Michael Denkowski and Graham Neubig
12:20--13:40 Lunch Break
Session 3
13:40--14:30 Keynote 3 - Kevin Knight
14:30--15:20 Keynote 4 - Quoc Le
15:20--15:30 Poster Session (Papers)
15:30--16:10 Poster Session (continued) and Coffee Break
Session 4
16:10--17:30 Panel Discussion (Chris Dyer, Alexander Rush, Kevin Knight, Quoc Le, Kyunghyun Cho)
17:30--17:40 Closing Remarks
Keynote 1 - Chris Dyer
Title: The Neural Noisy Channel: Generative Models for Sequence to Sequence Modeling
Abstract:
The first statistical models of translation relied on Bayes' rule to
factorize the probability of an output translation given an input into
two component probabilities: a target language model prior probability
(how likely is a candidate output?), and an inverse translation
probability (how likely is the observed input given a candidate
output?). Although this factorization has largely been abandoned in
favor of discriminative models that directly estimate the probability
of producing an output translation given an input, these
discriminative models suffer from a number of problems, including
undesirable explaining-away effects during training (e.g., label
bias), and difficulty learning from unpaired training samples. In
contrast, generative models based on the Bayes' rule factorization
must produce outputs that explain their inputs, and training with
unpaired samples (i.e., target language monolingual corpora) is
straightforward. I discuss the challenges and opportunities afforded
by generative models of sequence to sequence transduction, reporting
results on machine translation and abstractive summarization.
(This is joint work with Lei Yu, Tomas Kocisky, and Phil Blunsom.)
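For reference, the Bayes' rule factorization the abstract describes can be
written out explicitly; this is the standard noisy-channel formulation, not an
excerpt from the talk itself:

  \hat{y} = \arg\max_{y} p(y \mid x)
          = \arg\max_{y} \frac{p(x \mid y)\, p(y)}{p(x)}
          = \arg\max_{y} \underbrace{p(x \mid y)}_{\text{inverse translation model}} \; \underbrace{p(y)}_{\text{target LM prior}}

Here x is the observed source sentence and y a candidate translation; p(x) is
constant across candidates and can be dropped. Because the prior p(y) is
estimated independently of x, target-language monolingual corpora can be used
directly, which is the training advantage the abstract refers to.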
Keynote 2 - Alexander Rush
Title: Challenges in Neural Document Generation
Abstract:
Advances in neural machine translation have led to optimism for
natural language generation in tasks such as summarization and dialogue,
but it has been difficult to quantify what challenges remain in neural NLG.
In this talk, I will discuss recent work on long-form data-to-document generation,
a classic NLG task, using a new dataset that pairs comprehensive basketball game
statistics with full game descriptions. While state-of-the-art NMT systems
produce fluent output on this task, the generated documents are clearly insufficient
and suffer from basic issues in discourse, reference, and referring expression generation.
Recent tricks such as copy and coverage lead to clear improvements, but results for
end-to-end generation are not yet competitive for long-form documents.
Overall, neural document generation presents a difficult but interesting challenge
that may require different techniques than standard NMT.
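To make the task shape concrete, here is a minimal Python sketch of the input
and output of data-to-document generation; the record fields, names, and
template function below are invented for illustration and are not drawn from
the dataset described in the talk:

  # Hypothetical structured input: per-game statistics (invented example).
  record = {
      "home_team": "Hawks", "home_pts": 108,
      "away_team": "Knicks", "away_pts": 95,
      "top_scorer": "D. Smith", "top_scorer_pts": 27,
  }

  def template_summary(r):
      # A trivial slot-filling baseline: deterministic and factually faithful,
      # but rigid. Neural document generation instead learns to produce such
      # text end-to-end, which is where the discourse, reference, and
      # referring-expression issues mentioned above arise.
      return (f"The {r['home_team']} defeated the {r['away_team']} "
              f"{r['home_pts']}-{r['away_pts']}. {r['top_scorer']} "
              f"led all scorers with {r['top_scorer_pts']} points.")

  print(template_summary(record))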
Keynote 3 - Kevin Knight
Title: What is Neural MT Learning?
Abstract:
In this talk, I will observe what neural MT decides to extract from
source sentences, as a by-product of its end-to-end training. I will
also speculate about the power of neural MT-style networks, both in
general and with respect to how they are currently trained.
Keynote 4 - Quoc Le
Title: Google's Neural Machine Translation System
Abstract:
I will talk about the history of neural machine translation at Google
and some of our recent work on deploying neural machine translation at scale.