This site has been created to support the submission of "Cheap and Fast -- But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks" to EMNLP 2008.
- All Collected Data: all_collected_data.tgz
- Details: there is one TSV file per task. Each row corresponds to a single annotation, and the columns are the AMT HIT ID, the AMT Worker ID, the ID of the data example, the worker's label, and the gold-standard label.
- Sample Annotation Tasks:
- Affective Text: affect_sample.html
- Word Similarity: wordsim_sample.html
- Recognizing Textual Entailment: rte_sample.html
- Temporal Ordering: temp_sample.html
- Word Sense Disambiguation: wsd_sample.html
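As a starting point, the per-task TSV files can be loaded with the standard library alone. The sketch below assumes only the five columns described above (HIT ID, Worker ID, example ID, worker label, gold label) and computes the fraction of annotations that agree with the gold standard; the sample string and its labels are hypothetical, not drawn from the released data.

```python
import csv
import io

def agreement_with_gold(tsv_text):
    """Return the fraction of annotations whose worker label matches gold.

    Assumes five tab-separated columns per row, in order:
    HIT ID, Worker ID, example ID, worker label, gold label.
    """
    rows = list(csv.reader(io.StringIO(tsv_text), delimiter="\t"))
    if not rows:
        return 0.0
    matches = sum(1 for _, _, _, worker, gold in rows if worker == gold)
    return matches / len(rows)

# Hypothetical two-row sample in the documented column order:
sample = "HIT1\tW1\tex1\tpos\tpos\nHIT1\tW2\tex1\tneg\tpos\n"
print(agreement_with_gold(sample))  # → 0.5
```

To process a downloaded file, pass the contents of the task's TSV file to the same function (e.g. `agreement_with_gold(open(path).read())`).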