CRAC 2024

7th Workshop on Computational Models of Reference, Anaphora and Coreference

CRAC 2024, the Seventh Workshop on Computational Models of Reference, Anaphora and Coreference, was held at EMNLP 2024 (in Miami, Florida, on November 15).

About the workshop series

Since 2016, the yearly CRAC workshop (and its predecessor, CORBON) has become the primary forum for researchers interested in the computational modeling of reference, anaphora, and coreference to discuss and publish their results. Over the years, this workshop series has organized five shared tasks, which stimulated interest in new problems in this area of research, facilitated the discussion and dissemination of results on new problems and directions (e.g., multimodal reference resolution), and helped expand a coreference community once dominated by European researchers to include young researchers from the Americas.

The aim of the workshop is to provide a forum where work on all aspects of the computational treatment of anaphora, including resolution and annotation, can be presented.

Topics

The workshop welcomes submissions describing theoretical and applied computational work on anaphora/coreference resolution. Topics of interest include but are not limited to:

CRAC 2024 Shared Task on Multilingual Coreference Resolution

CRAC 2024 also featured a presentation of the results of the Shared Task on Multilingual Coreference Resolution. The list of accepted Shared Task papers and the program of the Shared Task session appear below.

Important Dates

Accepted Papers

Long papers

Short papers

Shared Task papers

Invited Talk

Jackie Chi Kit Cheung: Reference at the Heart of Natural Language Processing

Natural language is traditionally framed as a mapping from form to content, with reference being the connection between the two. Yet curiously, large language models have achieved impressive levels of performance and adoption through training on distributional signals, which concern form alone. In this talk, I argue for the importance of reference and coreference in NLP, and discuss topics in NLP that are touched by these phenomena, including model "hallucinations" and factual errors, knowledge updating, common-sense reasoning, and conversational agents. I discuss how existing evaluation practices based on large-scale benchmarking often mask the importance of reference-related phenomena, and present work from my lab that reflects on current evaluation practices and their validity. I call for more serious consideration of reference, including targeted evaluation of reference-related phenomena, as a necessary step towards achieving robust NLP systems.

Jackie Chi Kit Cheung is an associate professor at McGill University's School of Computer Science, where he co-directs the Reasoning and Learning Lab. He is a Canada CIFAR AI Chair and an Associate Scientific Co-Director at the Mila Quebec AI Institute. His research focuses on topics in natural language generation such as automatic summarization, and on integrating diverse knowledge sources into NLP systems for pragmatic and common-sense reasoning. He also works on applications of NLP to domains such as education, health, and language revitalization. He is motivated by how the structure of the world can be reflected in the structure of language processing systems. He is a consulting researcher at Microsoft Research Montreal.

Workshop schedule (November 15)

Opening remarks

Invited talk

Findings paper session

EMNLP 2024 paper

Research paper session

Shared task paper session

Panel discussion

Closing remarks

Program Committee

Organizing Committee