Prize money: Google is generously providing $1500 in prize funds to be awarded to the top-scoring participants. A single $750 prize will be awarded for the top-scoring submission on each dataset (CrowdFlower's and Google's); in the case of ties, the prize money will be divided evenly among those tied. NOTE: to be eligible for prize money, you must 1) make a top-scoring submission; 2) submit a paper by the deadline describing your system; *AND* 3) register for and attend the workshop.
Overview: The goals of CrowdScale 2013's Shared Task Challenge are to:
Datasets. To help advance research on crowdsourcing at scale, CrowdFlower and Google are providing two new challenge datasets:
"basic data" consists simply of three comma-separated value (CSV) columns:
"full data" provides more details regarding the structure and content of the questions, keyed by the same question ID. Note that the full data also includes the basic data; we have separated them above just to make it easier for people to get started. While one can participate fully in the shared task using only the basic data, we encourage participants to use the additional information in the full data to achieve higher quality.
"ground truth" provides the correct answers for a small, random sample of the questions. Participants are encouraged to tune their methods on this data before submitting their final answers. Ground truth questions will not be used in the final evaluation.
Evaluation Metric: for awarding prize money, answer quality will be scored by average recall over the class categories. Additional metrics (e.g., simple accuracy) will be reported for analysis but will not affect the prize money distribution. For transparency, you can download our MATLAB evaluation script (and let us know if you have any comments or corrections).
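To illustrate the metric, here is a minimal sketch of average recall (recall computed per class category, then averaged over categories). The question IDs and class labels below are illustrative only; the official scoring is defined by the MATLAB evaluation script.

```python
# Hypothetical sketch of macro-averaged ("average") recall.
# gold and predicted map question ID -> class label; labels are illustrative.
from collections import defaultdict

def average_recall(gold, predicted):
    """Per-class recall over the gold labels, averaged over classes."""
    correct = defaultdict(int)  # gold items of each class answered correctly
    total = defaultdict(int)    # gold items of each class
    for qid, label in gold.items():
        total[label] += 1
        if predicted.get(qid) == label:
            correct[label] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

gold = {"q1": "pos", "q2": "pos", "q3": "neg", "q4": "neu"}
pred = {"q1": "pos", "q2": "neg", "q3": "neg", "q4": "neu"}
print(average_recall(gold, pred))  # per-class recalls: pos 1/2, neg 1/1, neu 1/1
```

Note that a method can have high simple accuracy yet low average recall if it neglects rare classes, which is why the two metrics are reported separately.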
Submitting Results: result submissions consist simply of two CSV columns:
Each participating group may make a single submission for official evaluation. Collaborating groups may submit only once together and may not make separate individual submissions. The result file for each task (sentiment analysis and fact evaluation) should be compressed (.zip, .gz, or .bz2) and emailed to the organizers.
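The compression step can be done with standard command-line tools; `results.csv` below is a hypothetical filename, not the required one.

```shell
# Compress a hypothetical result file before emailing it to the organizers.
gzip -c results.csv > results.csv.gz
# or, equivalently, as a zip archive:
zip results.zip results.csv
```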
Submitting papers: Participants are expected to submit a paper describing their methods and *preliminary* results based on the released "ground truth" sample. Final results will be announced at the workshop. See the Call for Papers for additional details on paper format.
Questions? Contact the organizers.
Related Shared Tasks