Shared Annotation Task

### UPDATE ###


- Instructions for the annotation exercise session can be downloaded at the end of the page

- Materials for the annotation exercise session can be downloaded at the end of the page

NOTE: Download brat v1.3 (Crunchy Frog) for the annotation exercise



The organisers are planning to provide a set of news articles from different sources, spanning a period of time and focused on a specific topic, and will ask the participants to provide their own annotations, interpretations, and analyses of this dataset. We will collect these analyses before the workshop and summarise them to facilitate an insightful comparison; we will ask for clear documentation so that the comparisons are meaningful. Furthermore, we will ask participants who have systems and tools for extracting events and/or storylines to run their systems on this common dataset. These results will be compared to the annotated data (only indirect comparisons will be possible). The results of the combined manual and automatic annotation of the common dataset will be used to drive the discussions around three themes:


  • Definitions: What is an event? What is a storyline? How can they be formally and computationally formulated? Which properties of events contribute to the identification and extraction of storylines? How do complex event structures contribute to storylines?
  • Resources: How should we annotate event mentions? What are the core markables of a storyline? Can existing annotation schemes be re-used and adapted for storyline annotation? How should we annotate cross-document information for events and character perspectives? What kind of Language Resources for storylines and events are necessary? Can “cause” and “precondition” be encoded in Language Resources? How do they contribute to storylines?
  • Evaluation: How do we evaluate extracted events? How do we determine whether an extracted storyline is “good enough”? Can standard measures, such as Precision, Recall and F-measure, be applied to evaluate event and storyline extraction, or do we need different measures (a minimal scoring sketch follows this list)? Should evaluation take place at a global level, or must it be conducted separately on the different components of an event or storyline system? How can we evaluate complex event structures?
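
As a concrete reference point for the evaluation questions above, here is a minimal scoring sketch in Python. It assumes that gold and system event mentions are represented as exact (document id, character offset) spans; this is only an illustration of span-level Precision/Recall/F-measure, not a proposed official metric.

    # Minimal sketch: span-level Precision/Recall/F1 for extracted event mentions.
    # Assumes exact matching of (doc_id, start, end) spans; partial-match or
    # coreference-aware scoring would need a different comparison step.
    def precision_recall_f1(gold, predicted):
        # gold, predicted: sets of (doc_id, start_offset, end_offset) tuples
        true_positives = len(gold & predicted)
        precision = true_positives / len(predicted) if predicted else 0.0
        recall = true_positives / len(gold) if gold else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f1

    # Toy example with invented spans:
    gold = {("doc1", 10, 16), ("doc1", 42, 48), ("doc2", 5, 12)}
    pred = {("doc1", 10, 16), ("doc2", 5, 12), ("doc2", 30, 36)}
    print(precision_recall_f1(gold, pred))  # -> roughly (0.67, 0.67, 0.67)

Whether such a flat, mention-level score is adequate for whole storylines is exactly one of the open questions for the discussion.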

Suggestions for annotation tasks (feel free to explore other tasks!); a small brat standoff sketch illustrating these link types follows the list:
  • event-event sequence links;
  • event coreference links (full identity);
  • sub-event links (from macro-event mentions to micro-event mentions);
  • agent, patient, and location links (from events to their participants);
  • identifying a story and assigning a Moral/Point/Theme to it.
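
To make the suggested link types more tangible, here is a small, purely hypothetical brat standoff fragment together with a minimal Python parser for it. The labels (EventMention, Subevent, Agent, ...) and offsets are illustrative placeholders, not a prescribed scheme; an actual scheme would be declared in brat's annotation.conf.

    # Minimal sketch: a hypothetical brat standoff fragment (one string per .ann line)
    # and a simplified parser. Labels and offsets are invented for illustration only.
    SAMPLE_ANN_LINES = [
        "T1\tEventMention 0 6\tattack",       # text-bound event trigger
        "T2\tEventMention 25 32\tbombing",
        "T3\tParticipant 40 47\tmilitia",     # text-bound participant
        "E1\tEventMention:T1 Agent:T3",       # agent link (event -> participant)
        "R1\tSubevent Arg1:T1 Arg2:T2",       # sub-event link
        "*\tEquiv T1 T2",                     # event coreference (full identity)
    ]

    def parse_ann(lines):
        # Group brat standoff annotations by kind (heavily simplified).
        spans, events, relations, equivs = {}, {}, [], []
        for line in lines:
            ann_id, body = line.split("\t", 1)
            if ann_id.startswith("T"):                 # text-bound annotation
                info, surface = body.split("\t")
                label, start, end = info.split()
                spans[ann_id] = (label, int(start), int(end), surface)
            elif ann_id.startswith("E"):               # event with role:filler arguments
                trigger, *args = body.split()
                events[ann_id] = (trigger, dict(arg.split(":") for arg in args))
            elif ann_id.startswith("R"):               # binary relation, e.g. sub-event
                relations.append(tuple(body.split()))
            elif ann_id == "*":                        # equivalence set, e.g. coreference
                equivs.append(body.split()[1:])
        return spans, events, relations, equivs

    print(parse_ann(SAMPLE_ANN_LINES))

In brat, such annotations live in a .ann file next to each .txt file, so fragments like this are what the tool itself reads and writes, which should make it easier to compare annotations from different participants on the same texts.
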
Available datasets:
  • ECB+ Corpus (news): pre-annotated for event mentions, temporal expressions, locations, event participants, and event coreference (in- and cross-document); a minimal loading sketch follows this list.
  • Event-Event Relation Corpus (news): pre-annotated for event mentions and relations (see the paper for details).
  • Sherlock Holmes Texts (fiction): the texts of two stories, tokenized, POS-tagged, and annotated with extended WordNet senses.
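
For those who want to start from the pre-annotated ECB+ data, the sketch below pulls event-mention strings out of a single document. It assumes the XML layout of the published ECB+ release (a <Document> root with <token> elements, plus a <Markables> section whose ACTION_* markables point to tokens via <token_anchor>); the element names and the example path are assumptions to verify against the actual distribution.

    # Minimal sketch: list event-mention strings from one ECB+ XML file.
    # Element and attribute names are assumptions based on the published ECB+
    # release; verify them against the corpus documentation.
    import xml.etree.ElementTree as ET

    def event_mentions(path):
        root = ET.parse(path).getroot()                      # <Document ...>
        tokens = {t.get("t_id"): t.text for t in root.iter("token")}
        mentions = []
        for markable in root.find("Markables"):
            if "ACTION" in markable.tag:                     # event markables
                anchors = [a.get("t_id") for a in markable.findall("token_anchor")]
                if anchors:                                  # skip abstract cross-doc instances
                    mentions.append(" ".join(tokens[t] for t in anchors))
        return mentions

    # Hypothetical path into the ECB+ distribution:
    # print(event_mentions("ECB+/1/1_1ecbplus.xml"))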









Attachment: annotation_exercise_eventstory.zip (16k), uploaded by Tommaso Caselli, Jul 30, 2017.