Call for Papers


This workshop will bring together NLP researchers whose work deals with relational aspects of language understanding. The ability to reason about semantic relations is a fundamental linguistic competence: it is through recognizing explicit and implicit relations between entities and events that humans (and machines) can form a coherent representation of a text's meaning. Numerous recent workshops have focused on lexical semantics; RELMS-11 will highlight relational semantics.

The modeling of semantic relations has been considered from many angles, across a variety of tasks and sub-disciplines. In ontology learning and information extraction, the focus is on learning "encyclopedic" relations between entities in the domain of discourse. In structured prediction tasks such as semantic role labeling or biomedical event extraction, systems must reason about the relational content of a text -- about which entities and events enter into which mutual relations. The interpretation of compound nouns requires reasoning about probable and plausible relations between two entities, with limited knowledge of context. Some sources of textual information are inherently relational -- for example, content in online social networks -- so computational models can benefit from reasoning explicitly about relational structures.

Researchers working primarily in one of these contexts stand to gain from understanding the connections between the various NLP tasks in which semantic relations play a key role. Beyond asking whether methods developed for one task generalize to others, a key question is how different kinds of semantic relations interact. For example, encyclopedic world knowledge may be useful for "guiding" structured prediction; this could be particularly valuable in impoverished contexts such as compound noun interpretation and "implicit" semantic role labeling. Conversely, encyclopedic relation learning can be viewed as generalizing over instance-level relational analyses. Exploring these connections will be an important theme of the workshop.


Topics of interest include but are not restricted to the following:
  • classification of semantic relations in text, for example in the framework of SemEval-2 Tasks 8 and 9 or TempEval;
  • semantic structured prediction: semantic role labeling, event extraction;
  • semantic applications of statistical relational learning (Markov Logic, Inductive Logic Programming, and so on);
  • joint modeling of heterogeneous semantic relations, connections between traditionally distinct relational modeling tasks;
  • relational information extraction and ontology learning;
  • compound noun interpretation and retrieval of implicit semantic relations;
  • annotation and evaluation issues relating to semantic relations;
  • domain-specific aspects of relation learning.