Full Workshop Proceedings

Best Papers

Long Papers

- Learning Word Importance with the Neural Bag-of-Words Model
  Imran Sheikh, Irina Illina, Dominique Fohr and Georges Linares
- Why "Blow Out"? A Structural Analysis of the Movie Dialog Dataset
  Richard Searle and Megan Bingham-Walker
- Pair Distance Distribution: A Model of Semantic Representation
  Yonatan Ramni, Oded Maimon and Evgeni Khmelnitsky
- Towards Abstraction from Extraction: Multiple Timescale Gated Recurrent Unit for Summarization
  Minsoo Kim, Dennis Singh Moirangthem and Minho Lee
- An Empirical Evaluation of doc2vec with Practical Insights into Document Embedding Generation
  Jey Han Lau and Timothy Baldwin
- Parameterized Context Windows in Random Indexing
  Tobias Norlund, David Nilsson and Magnus Sahlgren
- Functional Distributional Semantics
  Guy Emerson and Ann Copestake
- Adjusting Word Embeddings with Semantic Intensity Orders
  Joo-Kyung Kim, Marie-Catherine de Marneffe and Eric Fosler-Lussier
- Sparsifying Word Representations for Deep Unordered Sentence Modeling
  Prasanna Sattigeri and Jayaraman J. Thiagarajan
- Quantifying the Vanishing Gradient and Long Distance Dependency Problem in Recursive Neural Networks and Recursive LSTMs
  Phong Le and Willem Zuidema
- Assisting Discussion Forum Users using Deep Recurrent Neural Networks
  Jacob Hagstedt P Suorra and Olof Mogren
- Learning Text Similarity with Siamese Recurrent Networks
  Paul Neculoiu, Maarten Versteegh and Mihai Rotaru
- Measuring Semantic Similarity of Words Using Concept Networks
  Gábor Recski, Eszter Iklódi, Katalin Pajkossy and Andras Kornai
- Neural Associative Memory for Dual-Sequence Modeling
  Dirk Weissenborn
- Towards Cross-Lingual Distributed Representations without Parallel Text Trained with Adversarial Autoencoders
  Antonio Valerio Miceli Barone
- Multilingual Modal Sense Classification using a Convolutional Neural Network
  Ana Marasović and Anette Frank
- Joint Learning of Sentence Embeddings for Relevance and Entailment
  Petr Baudiš, Silvestr Stanko and Jan Šedivý
- Learning Semantic Relatedness in Community Question Answering Using Neural Models
  Henry Nassif, Mitra Mohtarami and James Glass
- A Two-Stage Approach for Extending Event Detection to New Types via Neural Networks
  Thien Huu Nguyen, Lisheng Fu, Kyunghyun Cho and Ralph Grishman
- A Vector Model for Type-Theoretical Semantics
  Konstantin Sokolov
- Decomposing Bilexical Dependencies into Semantic and Syntactic Vectors
  Jeff Mitchell
- LSTM-Based Mixture-of-Experts for Knowledge-Aware Dialogues
  Phong Le, Marc Dymetman and Jean-Michel Renders
- Using Embedding Masks for Word Categorization
  Stefan Ruseti, Traian Rebedea and Stefan Trausan-Matu
- On the Compositionality and Semantic Interpretation of English Noun Compounds
  Corina Dima
- Explaining Predictions of Non-Linear Classifiers in NLP
  Leila Arras, Franziska Horn, Grégoire Montavon, Klaus-Robert Müller and Wojciech Samek
- Towards Generalizable Sentence Embeddings
  Eleni Triantafillou, Jamie Ryan Kiros, Raquel Urtasun and Richard Zemel
- A Joint Model for Word Embedding and Word Morphology
  Kris Cao and Marek Rei

Extended Abstracts

- MuFuRU: The Multi-Function Recurrent Unit
  Dirk Weissenborn and Tim Rocktäschel
- Decoding Neural Activity Patterns Associated with Sentences by Combining Experiential Attribute and Text-Based Semantic Models
  Andrew Anderson, Jeffrey Binder, Leonardo Fernandino, Colin Humphries, Lisa Conant, Katrin Erk and Rajeev Raizada
- Distilling Word Embeddings: An Encoding Approach
  Lili Mou, Ran Jia, Yan Xu, Ge Li, Lu Zhang and Zhi Jin
- Learning Phone Embeddings for Word Segmentation of Child-Directed Speech
  Jianqiang Ma, Çağrı Çöltekin and Erhard Hinrichs
- Learning Word Representations from Multiple Information Sources
  Yunchuan Chen, Lili Mou, Yan Xu, Ge Li and Zhi Jin
- Improving Preposition Sense Disambiguation with Representations Learned from Multilingual Data
  Hila Gonen and Yoav Goldberg
- Combining String Kernels and Gaussian Processes for Richer Text Representations
  Daniel Beck