The GREC-NER task combines named-entity recognition and coreference resolution, restricted to people entities. Systems insert REF and REFEX tags with coreference IDs around recognised mentions; the aim is to match the ‘gold-standard’ tags in the GREC-People data.
The GREC-NER training and development data come in two versions. The first is identical to the format described above (containing information about correct system outputs). The second is the test data input format, in which texts contain no REFEX, REF, or ALT-REFEX tags; moreover, a proportion of the referring expressions has been replaced with standardised named references. System outputs have the same format as test data inputs, plus ALT-REFEX and REFEX tags inserted around recognised people references.
[GREC-NER/Full'10 training/development data]
To measure accuracy in the NER task, the GREC Shared Task organisers provided a wrapper script which applies three commonly used performance measures for coreference resolution: MUC-6, CEAF, and B-CUBED.
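Of the three measures, B-CUBED is the simplest to state: for each mention, precision is the fraction of mentions in its system cluster that also share its gold cluster, recall is the same fraction measured against the gold cluster, and both are averaged over all mentions. The sketch below is an illustrative reimplementation of that definition, not the organisers' wrapper script; the function name and the set-of-sets cluster representation are assumptions for illustration.

```python
def b_cubed(gold_clusters, pred_clusters):
    """Illustrative B-CUBED precision/recall/F over entity clusters.

    Each argument is a list of sets of mention identifiers; every
    mention is assumed to appear in exactly one cluster on each side.
    """
    # Map each mention to the cluster containing it.
    gold_of = {m: c for c in gold_clusters for m in c}
    pred_of = {m: c for c in pred_clusters for m in c}
    mentions = list(gold_of)

    # Per-mention precision: overlap relative to the system cluster.
    p = sum(len(gold_of[m] & pred_of[m]) / len(pred_of[m])
            for m in mentions) / len(mentions)
    # Per-mention recall: overlap relative to the gold cluster.
    r = sum(len(gold_of[m] & pred_of[m]) / len(gold_of[m])
            for m in mentions) / len(mentions)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f


# Example: splitting one gold entity into two system clusters keeps
# precision perfect but costs recall.
gold = [{"Obama", "he", "the president"}]
pred = [{"Obama", "he"}, {"the president"}]
precision, recall, f1 = b_cubed(gold, pred)
```

In the example, precision stays at 1.0 (each system cluster is pure) while recall drops to 5/9, which is why B-CUBED is usually reported alongside MUC-6 and CEAF, each of which penalises clustering errors differently.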
W10-4226: Anja Belz; Eric Kow
The GREC Challenges 2010: Overview and Evaluation Results
W10-4228: Éric Charton; Michel Gagnon; Benoit Ozell
Poly-co: An Unsupervised Co-reference Detection System
W10-4232: Nicole Sparks; Charles Greenbacker; Kathleen McCoy; Che-Yu Kuo
UDel: Named Entity Recognition and Reference Regeneration from Surface Text