Relevance of Linguistic Structure in Neural NLP

INVITED SPEAKERS

Chris Dyer, Emily M. Bender, Jason Eisner, Mark Johnson

------------------------------------------------------------------------------------------------------------------

Program

8:50--9:00 Opening Remarks

9:00--10:00 Invited Talk: Chris Dyer

10:00--10:20 Talk: Compositional Morpheme Embeddings with Affixes as Functions and Stems as Arguments, Daniel Edmiston and Karl Stratos

10:20--11:00 Break

11:00--12:00 Invited Talk: Mark Johnson

What does Deep Learning tell us about Language?

12:00--12:20 Talk: Unsupervised Source Hierarchies for Low-Resource Neural Machine Translation, Anna Currey and Kenneth Heafield

12:20--13:30 Lunch Break

13:30--14:30 Poster session

14:30--15:30 Invited Talk: Jason Eisner

15:30--16:00 Break

16:00--17:00 Invited Talk: Emily M. Bender

Why general purpose NLU needs linguistics

------------------------------------------------------------------------------------------------------------------

New dates

  • Submission deadline: April 25
  • Notification of acceptance: May 21
  • Camera-ready due: May 28
  • Workshop in Melbourne: July 19

------------------------------------------------------------------------------------------------------------------

There is a long-standing tradition in NLP of focusing on fundamental language analysis tasks such as morphological analysis, POS tagging, parsing, word sense disambiguation (WSD), or semantic parsing. In the context of end-user NLP tasks, these have played the role of enabling technologies, providing a layer of representation upon which more complex tasks can be built.

However, in recent years we have witnessed a number of success stories for tasks ranging from information extraction and text comprehension to machine translation, in which the use of embeddings and neural networks has driven state-of-the-art results to new levels. More importantly, these are often end-to-end architectures, trained on large amounts of data, that make little or no use of a linguistically informed representation layer. For example, the modeling of word senses and word sense disambiguation are left implicit in the functional composition of word embeddings. Other questions, such as linear sentence processing versus syntactic parsing, or frequency-based word segmentation versus morphological analysis, are still up for debate.
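
As a toy illustration of how sense distinctions can remain implicit in composition, the following sketch (our own illustration, not part of the workshop material; the tiny hand-made vectors and the averaging scheme are stand-in assumptions for learned embeddings and learned composition functions) composes an ambiguous word with its context and compares the result to two hand-built sense prototypes:

    # Toy illustration only: tiny hand-made 2-d "embeddings" stand in for learned
    # vectors, and plain averaging stands in for a learned composition function.
    import numpy as np

    emb = {
        "bank":     np.array([0.5, 0.5]),  # ambiguous word, between both senses
        "loan":     np.array([1.0, 0.1]),
        "deposit":  np.array([0.9, 0.2]),
        "interest": np.array([0.8, 0.0]),
        "river":    np.array([0.1, 1.0]),
        "water":    np.array([0.0, 0.9]),
        "shore":    np.array([0.2, 0.8]),
    }

    def compose(word, context):
        """Context-sensitive vector: average the target word with its context."""
        return np.mean([emb[word]] + [emb[c] for c in context], axis=0)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hand-built "sense prototypes" from prototypical context words.
    finance = np.mean([emb["loan"], emb["deposit"], emb["interest"]], axis=0)
    riverside = np.mean([emb["river"], emb["water"], emb["shore"]], axis=0)

    v = compose("bank", ["loan", "interest"])          # "bank" in a money context
    print(cosine(v, finance) > cosine(v, riverside))   # True: the context decides

With real pre-trained embeddings and learned composition, a similar mechanism is what allows end-to-end models to resolve many ambiguities without any explicit WSD component.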

This workshop focuses on the role of linguistic structures in the neural network era. We aim to gauge their significance in building better, more generalizable NLP systems. We would like to address the following questions:

  1. Is linguistic information useful for neural network architectures: can it improve state-of-the-art neural architectures, and how should it be used? Does it help in building models that transfer better to new domains, new languages, new tasks, or to other scenarios with limited annotated data? (A minimal sketch of one way to inject such information follows this list.)
  2. Are there better implicit representations, whether or not similar to linguistic structures, that neural networks can extract and that can be transferred or shared across tasks and hence serve as core language representation layers?
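
To make question 1 concrete, here is a minimal, hypothetical sketch (in PyTorch; the class name, dimensions, and tagging setup are our own assumptions, not a method endorsed by the workshop) of the most direct way to feed linguistic structure to a neural model: concatenating POS-tag embeddings with word embeddings before a BiLSTM encoder.

    # Hypothetical sketch: inject linguistic features (POS tags) by concatenating
    # a tag embedding with the word embedding at the encoder input.
    import torch
    import torch.nn as nn

    class WordPosEncoder(nn.Module):
        def __init__(self, vocab_size, pos_size, word_dim=100, pos_dim=16, hidden=128):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, word_dim)
            self.pos_emb = nn.Embedding(pos_size, pos_dim)   # linguistic features
            self.encoder = nn.LSTM(word_dim + pos_dim, hidden,
                                   batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, hidden)

        def forward(self, word_ids, pos_ids):
            # (batch, seq, word_dim + pos_dim): words augmented with POS information
            x = torch.cat([self.word_emb(word_ids), self.pos_emb(pos_ids)], dim=-1)
            states, _ = self.encoder(x)
            return self.out(states)  # contextual representations for downstream use

    # Usage with dummy indices (batch of 2 sentences, length 5).
    model = WordPosEncoder(vocab_size=10000, pos_size=17)
    words = torch.randint(0, 10000, (2, 5))
    tags = torch.randint(0, 17, (2, 5))
    print(model(words, tags).shape)  # torch.Size([2, 5, 128])

Whether such explicit features still help once the encoder is large and trained end-to-end on abundant data, and whether they help more under domain or language transfer with little annotation, is precisely what the question asks.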