NLP Annotations
This site has been created to support the paper "Cheap and Fast -- But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks," published in the Proceedings of EMNLP 2008.
- All collected data with original task text: snow2008_mturk_data_with_orig_files_assembled_201904.zip
- See the enclosed README.txt for a description.
- All collected data: all_collected_data.tgz
- Details: there is one TSV file per task. Each row corresponds to a single annotation; the columns are the AMT HIT ID, the AMT Worker ID, the ID of the data example, the worker's label, and the gold-standard label.
- Sample annotation tasks:
- Affective Text: affect_sample.html
- Word Similarity: wordsim_sample.html
- Recognizing Textual Entailment: rte_sample.html
- Temporal Ordering: temp_sample.html
- Word Sense Disambiguation: wsd_sample.html
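Given the column order described above, a task file can be loaded and scored against the gold labels in a few lines. The sketch below is illustrative, not part of the release: the function name and the sample rows are made up, and it assumes the five tab-separated columns listed in the details above.

```python
import csv
import io
from collections import Counter

def worker_accuracy(tsv_text):
    """Compute each worker's agreement rate with the gold-standard label.

    Assumes the column order described above:
    HIT ID, Worker ID, example ID, worker label, gold label.
    """
    correct = Counter()
    total = Counter()
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    for hit_id, worker_id, example_id, label, gold in reader:
        total[worker_id] += 1
        if label == gold:
            correct[worker_id] += 1
    return {w: correct[w] / total[w] for w in total}

# Made-up rows in the described format (not real data from the release)
sample = "\n".join([
    "HIT1\tW1\tex1\tpos\tpos",
    "HIT1\tW2\tex1\tneg\tpos",
    "HIT2\tW1\tex2\tneg\tneg",
])
print(worker_accuracy(sample))  # {'W1': 1.0, 'W2': 0.0}
```

To read an actual file from the archive, pass the file object to `csv.reader` directly instead of wrapping a string in `io.StringIO`.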