Accepted Papers

Full Workshop Proceedings

Best Papers

  • Learning to Embed Words in Context for Syntactic Tasks
    Lifu Tu, Kevin Gimpel, Karen Livescu
  • Beyond Bilingual: Multi-sense Word Embeddings using Multilingual Context
    Shyam Upadhyay, Kai-Wei Chang, Matt Taddy, Adam Kalai, James Zou
  • Emergent Predication Structure in Hidden State Vectors of Neural Readers
    Hai Wang, Takeshi Onishi, Kevin Gimpel, David McAllester

Long Papers

  • Sense Contextualization in a Dependency-Based Compositional Distributional Model
    Pablo Gamallo
  • Machine Comprehension by Text-to-Text Neural Question Generation
    Xingdi Yuan, Tong Wang, Caglar Gulcehre, Alessandro Sordoni, Philip Bachman, Saizheng Zhang, Sandeep Subramanian, Adam Trischler
  • Emergent Predication Structure in Hidden State Vectors of Neural Readers
    Hai Wang, Takeshi Onishi, Kevin Gimpel, David McAllester
  • Transfer Learning for Neural Semantic Parsing
    Xing Fan, Emilio Monti, Lambert Mathias, Markus Dreyer
  • Modeling Large-Scale Structured Relationships with Shared Memory for Knowledge Base Completion
    Yelong Shen, Po-Sen Huang, Ming-Wei Chang, Jianfeng Gao
  • Semantic Vector Encoding and Similarity Search Using Fulltext Search Engines
    Jan Rygl, Jan Pomikálek, Radim Řehůřek, Michal Růžička, Vít Novotný, Petr Sojka
  • Multi-task Domain Adaptation for Sequence Tagging
    Nanyun Peng and Mark Dredze
  • Beyond Bilingual: Multi-sense Word Embeddings using Multilingual Context
    Shyam Upadhyay, Kai-Wei Chang, Matt Taddy, Adam Kalai, James Zou
  • DocTag2Vec: An Embedding Based Multi-label Learning Approach for Document Tagging
    Sheng Chen, Akshay Soni, Aasish Pappu, Yashar Mehdad
  • Binary Paragraph Vectors
    Karol Grzegorczyk and Marcin Kurdziel
  • Representing Compositionality based on Multiple Timescales Gated Recurrent Neural Networks with Adaptive Temporal Hierarchy for Character-Level Language Models
    Dennis Singh Moirangthem, Jegyung Son, Minho Lee
  • Prediction of Frame-to-Frame Relations in the FrameNet Hierarchy with Frame Embeddings
    Teresa Botschen, Hatem Mousselly Sergieh, Iryna Gurevych
  • Learning Joint Multilingual Sentence Representations with Neural Machine Translation
    Holger Schwenk and Matthijs Douze
  • Transfer Learning for Speech Recognition on a Budget
    Julius Kunze, Louis Kirsch, Ilia Kurenkov, Andreas Krug, Jens Johannsmeier, Sebastian Stober
  • Gradual Learning of Matrix-Space Models of Language for Sentiment Analysis
    Shima Asaadi and Sebastian Rudolph
  • NewsQA: A Machine Comprehension Dataset
    Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman
  • Intrinsic and Extrinsic Evaluation of Spatiotemporal Text Representations in Twitter Streams
    Lawrence Phillips, Kyle Shaffer, Dustin Arendt, Nathan Hodas, Svitlana Volkova
  • Rethinking Skip-thought: A Neighborhood based Approach
    Shuai Tang, Hailin Jin, Chen Fang, Zhaowen Wang, Virginia de Sa
  • A Frame Tracking Model for Memory-Enhanced Dialogue Systems
    Hannes Schulz, Jeremie Zumer, Layla El Asri, Shikhar Sharma
  • Adversarial Generation of Natural Language
    Sandeep Subramanian, Sai Rajeswar, Francis Dutil, Chris Pal, Aaron Courville
  • Learning to Embed Words in Context for Syntactic Tasks
    Lifu Tu, Kevin Gimpel, Karen Livescu

Short Papers

  • Learning Bilingual Projections of Embeddings for Vocabulary Expansion in Machine Translation
    Pranava Swaroop Madhyastha and Cristina España-Bonet
  • Learning when to skim and when to read
    Alexander Johansen and Richard Socher
  • Sequential Attention: A Context-Aware Alignment Function for Machine Reading
    Sebastian Brarda, Philip Yeres, Samuel Bowman
  • Knowledge Base Completion: Baselines Strike Back
    Rudolf Kadlec, Ondrej Bajgar, Jan Kleindienst
  • Deep Active Learning for Named Entity Recognition
    Yanyao Shen, Hyokun Yun, Zachary Lipton, Yakov Kronrod, Animashree Anandkumar
  • Plan, Attend, Generate: Character-Level Neural Machine Translation with Planning
    Caglar Gulcehre, Francis Dutil, Adam Trischler, Yoshua Bengio
  • Improving Language Modeling using Densely Connected Recurrent Neural Networks
    Fréderic Godin, Joni Dambre, Wesley De Neve
  • Towards Harnessing Memory Networks for Coreference Resolution
    Joe Cheri and Pushpak Bhattacharyya
  • Does the Geometry of Word Embeddings Help Document Classification? A Case Study on Persistent Homology-Based Representations
    Paul Michel, Abhilasha Ravichander, Shruti Rijhwani
  • Combining Word-Level and Character-Level Representations for Relation Classification of Informal Text
    Dongyun Liang, Weiran Xu, Yinge Zhao
  • Context encoders as a simple but powerful extension of word2vec
    Franziska Horn

Extended Abstracts

  • Using millions of emoji occurrences to pretrain any-domain models for detecting emotion, sentiment and sarcasm
    Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, Sune Lehmann
  • The Coadaptation Problem when Learning How and What to Compose
    Andrew Drozdov and Samuel Bowman
  • Evaluating Layers of Representation in Neural Machine Translation on Syntactic and Semantic Tagging
    Yonatan Belinkov, Lluís Màrquez, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, James Glass
  • Regularized Topic Models for Sparse Interpretable Word Embeddings
    Anna Potapenko and Artem Popov
  • MUSE: Modularizing Unsupervised Sense Embeddings
    Guang-He Lee and Yun-Nung Chen