The modeling of semantic relations has been considered from many angles, across a variety of tasks and sub-disciplines. In ontology learning and information extraction, the focus is on learning "encyclopedic" relations between entities in the domain of discourse. In structured prediction tasks such as semantic role labeling or biomedical event extraction, systems must reason about the relational content of a text: which entities and events enter into which mutual relations. The interpretation of compound nouns requires reasoning about probable and plausible relations between two entities, with limited knowledge of context. Some sources of textual information are inherently relational -- for example, content in online social networks -- so computational models can benefit from reasoning explicitly about relational structures.
Researchers working primarily in specific modeling contexts stand to gain from understanding the connections between the various NLP tasks in which semantic relations play a key role. As well as considering whether methods used for one task may generalize to others, a key question is how different kinds of semantic relations interact. For example, encyclopedic world knowledge may be useful for "guiding" structured prediction; this could be particularly valuable in impoverished contexts such as compound noun interpretation and "implicit" semantic role labeling. Conversely, encyclopedic relation learning can be viewed as generalizing over instance-level relational analyses. Exploring these connections will be an important theme of the workshop.
(with thanks to Rob Cottingham)