The overall aim of the GREC-Full Task is to improve the referential clarity and fluency of input texts. Systems should replace REs as and where necessary to produce as clear, fluent and coherent a text as possible. The task can be viewed as comprising three sub-tasks: (1) named entity recognition (as in GREC-NER); (2) a conversion tool that produces lists of possible REs for each entity; and (3) named entity generation (as in GREC-NEG).
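As a rough illustration of how these three sub-tasks chain together, the toy pipeline below uses trivial stand-in components: hard-coded NER output, a hand-written candidate list, and a naive "pronominalise repeated mentions" rule. None of these functions are part of the released GREC tools; they are hypothetical placeholders showing only the overall data flow.

```python
from typing import Dict, List

def run_ner(text: str) -> Dict[str, List[str]]:
    """(1) GREC-NER stand-in: map each entity to its surface mentions.
    Hard-coded for the toy example; a real system would use a trained tagger."""
    return {"Bram Stoker": ["Bram Stoker", "Bram Stoker"]}

def list_candidate_res(entities: Dict[str, List[str]]) -> Dict[str, List[str]]:
    """(2) Conversion-tool stand-in: list possible REs for each entity."""
    return {e: [e, e.split()[-1], "he", "the author"] for e in entities}

def select_res(text: str, entities: Dict[str, List[str]],
               candidates: Dict[str, List[str]]) -> str:
    """(3) GREC-NEG stand-in: keep the first mention, pronominalise the rest
    (casing and agreement are ignored in this toy)."""
    for entity, mentions in entities.items():
        cut = text.find(mentions[0]) + len(mentions[0])
        head, tail = text[:cut], text[cut:]
        for mention in mentions[1:]:
            tail = tail.replace(mention, candidates[entity][2], 1)
        text = head + tail
    return text

text = "Bram Stoker wrote Dracula. Bram Stoker was born in Dublin."
entities = run_ner(text)
print(select_res(text, entities, list_candidate_res(entities)))
# -> "Bram Stoker wrote Dracula. he was born in Dublin."
```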
Inputs are as described for GREC-NER, and outputs are as described for GREC-NEG.
[GREC-NER/Full'10 training/development data]
We provide a tool that computes (i) BLEU-3; (ii) NIST; (iii) string-edit distance; and (iv) length-normalised string-edit distance. Human-assessed evaluation uses preference-strength judgements, collected with sliders, to assess Fluency and Referential Clarity.
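BLEU and NIST are standard machine-translation metrics available in common toolkits. The sketch below illustrates only the string-edit metrics: a token-level Levenshtein distance plus one plausible normalisation (division by the length of the longer sequence). It is not the released evaluation tool, whose exact normalisation may differ.

```python
def edit_distance(ref_tokens, sys_tokens):
    """Token-level Levenshtein (string-edit) distance via dynamic programming."""
    m, n = len(ref_tokens), len(sys_tokens)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref_tokens[i - 1] == sys_tokens[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

def normalised_edit_distance(ref_tokens, sys_tokens):
    """Edit distance divided by the length of the longer sequence
    (one plausible normalisation; the official tool may differ)."""
    longest = max(len(ref_tokens), len(sys_tokens)) or 1
    return edit_distance(ref_tokens, sys_tokens) / longest

ref = "Stoker visited the English coastal town of Whitby".split()
out = "He visited the English coastal town of Whitby".split()
print(edit_distance(ref, out), normalised_edit_distance(ref, out))  # 1 0.125
```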
W10-4226: Anja Belz; Eric Kow
The GREC Challenges 2010: Overview and Evaluation Results
W10-4232: Nicole Sparks; Charles Greenbacker; Kathleen McCoy; Che-Yu Kuo
UDel: Named Entity Recognition and Reference Regeneration from Surface Text