Call for Papers

Already accepting submissions: https://www.softconf.com/acl2018/RELNLP/

We adopt the ACL guidelines regarding arXiv and other preprint versions of submitted papers. See the ACL policy for details.

Call for long and short papers!

Long papers may consist of up to eight (8) pages of content, plus unlimited references; final versions of long papers will be given one additional page of content (up to 9 pages) so that reviewers’ comments can be taken into account.

Short papers may consist of up to four (4) pages of content, plus unlimited references. Upon acceptance, short papers will be given five (5) content pages in the proceedings.

Submissions must follow the ACL 2018 template: http://acl2018.org/call-for-papers/

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------

There is a long-standing tradition in NLP of focusing on fundamental language analysis tasks such as morphological analysis, POS tagging, parsing, word sense disambiguation (WSD) or semantic parsing. In the context of end-user NLP tasks, these have served as enabling technologies, providing a layer of representation upon which more complex tasks can be built.

However, in recent years we have witnessed a number of success stories for tasks ranging from information extraction and text comprehension to machine translation, in which the use of embeddings and neural networks has driven state-of-the-art results to new levels. More importantly, these are often end-to-end architectures trained on large amounts of data that make little or no use of a linguistically informed language representation layer. For example, the modeling of word senses and word sense disambiguation is left implicit in the functional composition of word embeddings. Other questions, such as linear sentence processing versus syntactic parsing, or frequency-based word segmentation versus morphological analysis, are still up for debate.

This workshop focuses on the role of linguistic structure in the neural network era. We aim to address the following questions:

  • What is the role of explicit linguistic structure in NLP models? Can it further improve the state of the art? Is it needed when transferring the technology to new domains, new languages, or new tasks, or in other scenarios with limited labeled data?
  • Do neural networks implicitly learn aspects of language, whether similar to linguistic structure or not, that can be shared across tasks and serve as core language representation layers?

We welcome the submission of novel research papers and position papers on the following general topics:

  • Multi-task learning.
  • Transfer learning.
  • Incorporating explicit linguistic representations into existing neural architectures.
  • What does my neural net learn about linguistics, and how do I know it?