Natural Language Engineering - Special Issue

Informing Neural Architectures for NLP with Linguistic and Background Knowledge

This page is dedicated to a special issue of the Natural Language Engineering journal on informing neural architectures for NLP with linguistic and background knowledge.

There has been a huge amount of research on the use of deep neural architectures for Natural Language Processing (NLP). In recent years (roughly since 2010), the proceedings of all major conferences in Artificial Intelligence and Computational Linguistics, including AAAI, IJCAI, ACL, NAACL, NIPS, and ICLR, have featured a substantial number of contributions on deep neural networks applied to NLP. Although automatic representation learning, as opposed to manual feature engineering, has become the de facto standard methodological framework, linguistic knowledge, whether encoded informally as human expertise and intuitions about language or formally in large symbolic linguistic resources, has not become obsolete. It remains an invaluable source of knowledge that most modern NLP technologies require to reach peak performance.

This special issue aims to collect state-of-the-art contributions on the development and use of linguistic and background knowledge for neural architectures in NLP, such as task-specific objective functions informed by linguistic knowledge, or the use of linguistic resources like lexical knowledge bases and multilingual dictionaries to generate training data for neural architectures and to specialize and improve text representations.

A prime example of the line of work we would like to explore in this special issue is the paper on retrofitting word vectors to semantic lexicons (Faruqui et al., NAACL 2015), which was the first to explore the use of symbolic semantic lexical resources to improve word representations and is among the most cited papers of NAACL 2015 (according to Google Scholar as of October 2017). Moreover, in his plenary invited talk at ACL 2017 on "Squashing Computational Linguistics", Noah Smith argued for the need for language-appropriate inductive biases in representation learning models of language, through assumptions baked into a model, constraints on an inference algorithm, or linguistic analysis applied to data.
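To make the retrofitting idea concrete, here is a minimal sketch of its iterative update in Python: each word vector is repeatedly pulled toward its original embedding and toward the vectors of its neighbours in a lexicon graph. This is an illustration under simplifying assumptions, not the authors' reference implementation; the uniform edge weights, the `alpha` parameter, and the toy `lexicon` format are our own choices for the example.

```python
import numpy as np

def retrofit(vectors, lexicon, iterations=10, alpha=1.0):
    """Sketch of the retrofitting update (after Faruqui et al., 2015).

    vectors: dict mapping word -> np.ndarray (pre-trained embeddings)
    lexicon: dict mapping word -> list of related words (e.g. synonyms
             drawn from a resource such as WordNet or PPDB)
    """
    new_vectors = {w: v.copy() for w, v in vectors.items()}
    # Only words present in both the embeddings and the lexicon move;
    # all other vectors keep their original values.
    shared = [w for w in lexicon if w in vectors]
    for _ in range(iterations):
        for word in shared:
            neighbours = [n for n in lexicon[word] if n in vectors]
            if not neighbours:
                continue
            beta = 1.0 / len(neighbours)  # uniform edge weights (assumption)
            # Weighted average of the original vector and the current
            # vectors of the lexicon neighbours.
            numerator = alpha * vectors[word]
            for n in neighbours:
                numerator = numerator + beta * new_vectors[n]
            new_vectors[word] = numerator / (alpha + beta * len(neighbours))
    return new_vectors

# Toy usage (hypothetical data): the synonym links pull the vectors of
# "happy", "glad", and "joyful" closer together.
emb = {"happy": np.array([0.9, 0.1]),
       "glad": np.array([0.2, 0.8]),
       "joyful": np.array([0.5, 0.5])}
syn = {"happy": ["glad", "joyful"], "glad": ["happy"], "joyful": ["happy"]}
retrofitted = retrofit(emb, syn)
```

In the original formulation this update is the closed-form coordinate step of a quadratic objective over the lexicon graph; the sketch above simply applies it a fixed number of times, which in practice converges quickly.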

The deadline for submissions has been extended to November 15, 2018 (due to overlap with EMNLP-18).

The goal of the special issue is to find new forms of "symbiosis" between neural networks and symbolic knowledge resources.

This picture from Wikipedia illustrates the mutualistic symbiotic relationship between the clownfish and the sea anemone; we hope it can inspire you while preparing your submission.