Learning Semantics 2014 /səˈmantiks/
Montréal, Friday December 12th
Welcome to the NIPS 2014 workshop on Learning Semantics!
Overview
Understanding the semantic structure of unstructured data -- text, dialogs, images -- is a critical challenge given the central role such data play in many applications, including question answering, dialog systems, and information retrieval. In recent years, there has been much interest in designing models and algorithms that automatically extract and manipulate semantic representations from raw data.
Semantics is a diverse field. It encompasses extracting structured data from text and dialogs (knowledge base extraction, logical form extraction, information extraction), linguistic approaches to extracting and composing representations of meaning, and inference and reasoning over meaning representations based on logic or algebra. It also includes approaches that aim to ground language by learning relations between language and visual observations, linking language to the physical world (e.g., through robotics or machine commands). Despite spanning different disciplines with seemingly incompatible views, these approaches to semantics all aim to enable computers to evolve and interact with humans and the physical world in general.
The goal of the workshop is twofold. First, we aim to gather experts from the different fields of semantics to foster cross-fertilization, discussion, and constructive debate. Second, we encourage invited speakers and participants to present their future research directions, take positions, and highlight the key challenges the community needs to face. The workshop devotes most of the program to panel sessions about future directions.
We welcome contributions (abstracts of up to 4 pages) in the following areas and related topics:
- Word similarities and sense disambiguation
- Information and relation extraction
- Lexical and compositional semantics
- Learning semantic frames and semantic role labelling
- Grounded language learning
- Semantic representation for dialog understanding
- Visual scene understanding
- Multi-modal semantic representation and reasoning
Submit your abstract through CMT:
- Submission Deadline: Friday October 17th @ 4:59 PM EDT (UTC-4, Montreal Time)
- Author Notification: Friday November 7th
- Camera Ready Due: Friday November 14th
- Workshop Day: Friday December 12th
Morning
Machine Reasoning & Artificial Intelligence
- 08:30a Pedro Domingos, University of Washington, Symmetry-Based Semantic Parsing
- 08:50a Tomas Mikolov, Facebook, Challenges in Development of Machine Intelligence
- 09:10a Luke Zettlemoyer, University of Washington, Semantic Parsing for Knowledge Extraction
- 09:30a Panel Discussion
Contributed Posters
- 10:00a Contributed Posters, Coffee Break
Natural Language Processing & Semantics from Text Corpora
- 10:30a Stephen Clark, University of Cambridge, Composition in Distributed Semantics
- 10:50a Sebastian Riedel, University College London, Embedding Probabilistic Logic for Machine Reading
- 11:10a Ivan Titov, University of Amsterdam, Inducing Semantic Frames and Roles from Text in a Reconstruction-Error Minimization Framework
- 11:30a Panel Discussion
Afternoon
Personal Assistants, Dialog Systems, and Question Answering
- 03:00p Susan Hendrich, Microsoft Cortana
- 03:20p Ashutosh Saxena, Cornell, Tell Me Dave: Context-Sensitive Grounding of Natural Language into Robotic Tasks
- 03:40p Jason Weston, Facebook, Memory Networks
- 04:00p Panel Discussion
Contributed Posters
- 04:30p Contributed Posters, Coffee Break
Reasoning from Visual Scenes
- 05:00p Alyosha Efros, UC Berkeley, Towards The Visual Memex
- 05:20p Jeffrey Siskind, Purdue University, Learning to Ground Sentences in Video
- 05:40p Larry Zitnick, Microsoft Research, Forget Reality: Learning from Visual Abstraction
- 06:00p Panel Discussion
Accepted Posters
Morning Session (10:00a-10:30a)
- G. Boleda and K. Erk, Distributional semantic features as semantic primitives - or not
- C. Burges, E. Renshaw and A. Pastusiak, Relations World: A Possibilistic Graphical Model
- J. Cheng, D. Kartsaklis and E. Grefenstette, Investigating the Role of Prior Disambiguation in Deep-learning Compositional Models of Meaning
- L. Fagarasan, E. Maria Vecchi and S. Clark, From distributional semantics to feature norms: grounding semantic models in human perceptual data
- F. Hill, K. Cho, S. Jean, C. Devin and Y. Bengio, Not all Neural Embeddings are Born Equal
- M. Iyyer, J. Boyd-Graber and H. Daumé III, Generating Sentences from Semantic Vector Space Representations
- T. Polajnar, L. Rimell and S. Clark, Using Sentence Plausibility to Learn the Semantics of Transitive Verbs
- M. Rabinovich and Z. Ghahramani, Efficient Inference for Unsupervised Semantic Parsing
- S. Ritter, C. Long, D. Paperno, M. Baroni, M. Botvinick and A. Goldberg, Leveraging Preposition Ambiguity to Assess Representation of Semantic Interaction in CDSM
- M. Yu, M. Gormley and M. Dredze, Factor-based Compositional Embedding Models
Afternoon Session (4:10p-5:00p)
- J. M. Hernández-Lobato, J. Lloyd, D. Hernández-Lobato and Z. Ghahramani, Learning the Semantics of Discrete Random Variables: Ordinal or Categorical?
- S. J. Hwang and L. Sigal, A Unified Semantic Embedding: Relating Taxonomies and Attributes
- A. Lazaridou, N. T. Pham and M. Baroni, Combining Language and Vision with a Multimodal Skip-gram Model
- M. Malinowski and M. Fritz, Towards a Visual Turing Challenge
- G. Synnaeve, M. Versteegh and E. Dupoux, Learning Words from Images and Speech
- J. Weston, S. Chopra and A. Bordes, Memory Networks
- R. Xu, J. Lu, C. Xiong and J. Corso, Improving Word Representations via Global Visual Context
- B. Yang, S. Yih, X. He, J. Gao and L. Deng, Learning Multi-Relational Semantics Using Neural-Embedding Models
Organizers
- Cédric Archambeau (Amazon)
- Antoine Bordes (Facebook)
- Léon Bottou (Microsoft)
- Chris Burges (Microsoft)
- David Grangier (Facebook)
Related Workshops
This workshop directly follows the previous workshop:
NIPS 2011 Workshop on Learning Semantics (A. Bordes and L. Bottou were already organizers of this event).
It is related to the workshops on automated knowledge base construction:
NIPS 2013: Knowledge Extraction from Text
NIPS 2009: Grammar Induction, Representation of Language and Language Learning
NIPS 2008: Speech and Language: Learning-based Methods and Systems
NIPS 2007: The grammar of vision: probabilistic grammar-based models for visual scene understanding and object categorization
References
Beltagy, I., Chau, C., Boleda, G., Garrette, D., Erk, K., Mooney, R.: Montague Meets Markov: Deep Semantics with Probabilistic Logical Form. Proceedings of the 2nd Joint Conference on Lexical and Computational Semantics (*SEM) (2013)
Bordes, A., Glorot, X., Weston, J., Bengio, Y.: Joint learning of words and meaning representations for open-text semantic parsing. Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS) (2012)
Bottou, L.: From machine learning to machine reasoning: an essay. Machine Learning 94, 133–149 (2014)
Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., Kuksa, P.: Natural language processing (almost) from scratch. Journal of Machine Learning Research 12, 2493–2537 (2011)
Krishnamurthy, J., Mitchell, T.: Vector Space Semantic Parsing: A Framework for Compositional Vector Space Models. Proceedings of the ACL 2013 Workshop on Continuous Vector Space Models and their Compositionality (2013).
Lewis, D.: General semantics. Synthese 22, 18–67 (1970). DOI 10.1007/BF00413598. URL http://dx.doi.org/10.1007/BF00413598
Liang, P., Jordan, M.I., Klein, D.: Learning dependency-based compositional semantics. Association for Computational Linguistics (ACL), pp. 590–599 (2011)
Mitchell, J., Lapata, M.: Vector-based models of semantic composition. Proceedings of ACL-08: HLT pp. 236–244 (2008)
Poon, H., Domingos, P.: Unsupervised ontology induction from text. Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pp. 296–305. (2010)
Riedel, S., Yao, L., McCallum, A., Marlin, B.: Relation Extraction with Matrix Factorization and Universal Schemas. Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (2013).
Socher, R., Lin, C.C., Ng, A.Y., Manning, C.D.: Parsing Natural Scenes and Natural Language with Recursive Neural Networks. Proceedings of the 26th International Conference on Machine Learning (ICML) (2011)
Turney, P., Pantel, P.: From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research 37, 141–188 (2010)
Zelle, J., Mooney, R.: Learning to parse database queries using inductive logic programming. Proceedings of the National Conference on Artificial Intelligence (1996)
Zanzotto, F.M., Ferrone, L., Baroni, M.: When the whole is not greater than the combination of its parts: A decompositional look at compositional distributional semantics. Computational Linguistics (to appear)
Zettlemoyer, L., Collins, M.: Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. Proceedings of the Conference on Uncertainty in Artificial Intelligence (2005)
Zitnick, C.L., Parikh, D., Vanderwende, L.: Learning the Visual Interpretation of Sentences. Proceedings of the International Conference on Computer Vision (ICCV) (2013)
Acknowledgments
We thank our sponsors Facebook and Microsoft, as well as the program committee (Cédric Archambeau, Antoine Bordes, Léon Bottou, Chris Burges, Sumit Chopra, Ronan Collobert, Yunchao Gong, David Grangier, Armand Joulin, Rémi Lebret, Tomas Mikolov, Florent Perronnin, Hoifung Poon, Marc'Aurelio Ranzato, Matthew Richardson, Sebastian Riedel, Ivan Titov, Jason Weston, Scott Yih), for their help in the review process.