CRAC 2019 is the Second Workshop on Computational Models of Reference, Anaphora and Coreference, accepted to NAACL 2019.
Background: After the Discourse Anaphora and Anaphor Resolution Colloquium series ended in 2011, research on anaphora and coreference resolution was scattered across very different fora, until a common event in computational linguistics dedicated entirely to this area was revived with the Coreference Beyond OntoNotes (CORBON) workshop, co-located with NAACL in 2016 and with EACL in 2017. In 2018 the workshop's focus, perceived as too narrow, was broadened to cover all aspects of the computational modelling of reference, anaphora, and coreference, resulting in the CRAC 2018 workshop held at NAACL. Following recent advances in the application of word embeddings and deep neural networks to various NLP tasks, we believe that cross-lingual coreference resolution can also benefit from this new perspective. In 2019 CRAC will have the special theme of “Universal Coreference”, investigating the possibility of developing a unified, language-independent framework for coreference annotation, resolution and evaluation.
Objectives: The aim of the workshop is to provide a forum for presenting work on all aspects of computational anaphora resolution and annotation, covering both coreference and other types of anaphora such as bridging reference resolution and discourse deixis.
A special theme of the 2019 edition of the workshop is the Universal Coreference framework – a unified markup scheme applicable to multiple languages, reflecting a common cross-linguistic understanding of reference-related phenomena. This theme was motivated by the recent success of Universal Dependencies. As with Universal Dependencies, the Universal Coreference framework aims to facilitate the analysis of referential similarities and idiosyncrasies among typologically different languages, support comparative evaluation of coreference resolution systems, and enable comparative linguistic studies. The workshop organizers will develop and implement an initial version of the framework and organize a shared task around it as part of CRAC 2019. In addition, the workshop will include a panel discussion on possible improvements to the initial framework based on the outcomes of the shared task.
Topics: The workshop welcomes submissions describing both theoretical and applied computational work on anaphora / coreference resolution, including work on languages other than English and on less-researched types of anaphora such as bridging references.
Topics of interest include but are not limited to the following:
- Coreference resolution for less-researched languages
- Annotation and interpretation of anaphoric relations, including relations other than identity coreference (e.g., bridging references, reference to abstract entities)
- Investigation of difficult cases of anaphora / coreference and their resolution
- Anaphora / coreference resolution in noisy data (e.g. in speech, social media)
- New applications of coreference resolution
- Universal Coreference
Format: 1 day (two invited talks, one panel, oral and poster presentations, and a shared task)
Universal Coreference panel: Background and goals
Currently, referential relations are most frequently analyzed in isolation, with each language or annotation project proposing its own annotation model, usually incompatible with existing ones. With many incompatible schemata in use, it is difficult to compare the quality of tools available for different languages, to develop and maintain multilingual systems, and to perform cross-lingual analyses. Previous attempts at unifying annotation (e.g., those based on the OntoNotes corpus) saw little adoption, most likely due to the complexity of the model and the resulting portability issues. In light of this problem, the workshop will feature a panel organized around an initial Universal Coreference framework developed by the workshop organizers. Specifically, the panel will focus on the prospects of cross-lingual decoding of Universal Coreference in a set of corpora in various languages featuring uniform annotation of direct nominal coreference. The proposed framework is intended to cover a much wider range of languages than previous tasks on multilingual coreference resolution, such as the CoNLL-2012 Shared Task on Modeling Multilingual Unrestricted Coreference in OntoNotes, without the restriction of requiring parallel data. The framework can be extended in a systematic manner, by releasing corpora for new languages and new versions of the annotation guidelines. At the same time, it does not prohibit language-specific extensions built on top of the common layer.
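To make the idea of a uniform, machine-readable coreference layer concrete, the sketch below parses a toy document in a CoNLL-2012-style coreference markup (one token per line, with bracketed entity ids in the final column). The Universal Coreference format itself is not defined in this proposal, so the format shown here is only an assumed illustration of the general approach, not the actual scheme:

```python
# Toy document in a CoNLL-2012-style coreference markup: one token per line,
# last column marks mention spans with entity ids ("(0)" = single-token
# mention of entity 0, "-" = no mention). Illustrative format only; the
# actual Universal Coreference scheme is not specified in this proposal.
TOY_DOC = """\
John    (0)
met     -
Mary    (1)
.       -
He      (0)
liked   -
her     (1)
"""

def parse_mentions(conll_text):
    """Return {entity_id: [(start, end), ...]} with inclusive token spans."""
    open_spans = {}   # entity id -> start index of the currently open span
    mentions = {}     # entity id -> list of completed (start, end) spans
    for i, line in enumerate(conll_text.strip().splitlines()):
        coref = line.split()[-1]
        if coref == "-":
            continue
        # A field like "(0)", "(0", or "0)" may hold several parts
        # separated by "|"; handle each part independently.
        for part in coref.split("|"):
            eid = int(part.strip("()"))
            if part.startswith("("):
                open_spans[eid] = i
            if part.endswith(")"):
                start = open_spans.pop(eid)
                mentions.setdefault(eid, []).append((start, i))
    return mentions

# Entity 0 = {John, He}, entity 1 = {Mary, her}
print(parse_mentions(TOY_DOC))
```

A shared, column-based representation of this kind is what would allow the same decoding and evaluation code to run unchanged over corpora in different languages.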
Tentative Program Committee
- Anders Bjorkelund, University of Stuttgart
- Antonio Branco, University of Lisbon
- Dan Cristea, A. I. Cuza University of Iasi
- Sobha Lalitha Devi, AU-KBC Research Center, Anna University of Chennai
- Stephanie Dipper, University of Bochum
- Yulia Grishina, Amazon (confirmed)
- Veronique Hoste, Ghent University (confirmed)
- Ryu Iida, National Institute of Information and Communications Technology (NICT), Japan (confirmed)
- Varada Kolhatkar, Simon Fraser University
- Emmanuel Lassalle, Global Systematic Investors LLP, UK
- Chris Manning, Stanford University
- Katja Markert, Heidelberg University (confirmed)
- Sebastian Martschat, Heidelberg University
- Ruslan Mitkov, University of Wolverhampton
- Costanza Navaretta, University of Copenhagen (confirmed)
- Anna Nedoluzhko, Charles University in Prague
- Michal Novak, Charles University in Prague
- Maciej Ogrodniczuk, Institute of Computer Science, Polish Academy of Sciences (confirmed)
- Constantin Orasan, University of Wolverhampton (confirmed)
- Massimo Poesio, Queen Mary University of London (confirmed)
- Sameer Pradhan, cemantix.org and Boulder Learning Inc.
- Marta Recasens, Google Inc.
- Dan Roth, University of Pennsylvania
- Veselin Stoyanov, Facebook
- Yannick Versley, IBM (confirmed)
- Sam Wiseman, Harvard University
- Heike Zinsmeister, University of Hamburg (confirmed)
Organizers
- Maciej Ogrodniczuk, Institute of Computer Science, Polish Academy of Sciences
- Sameer Pradhan, cemantix.org and Boulder Learning Inc.
- Yulia Grishina, Amazon
- Vincent Ng, University of Texas at Dallas