Creating Speech and Language Data With Amazon’s Mechanical Turk

Note: We welcome papers on ALL crowdsourcing tools (Amazon Mechanical Turk, CrowdFlower, Herd It, etc.)

Amazon's Mechanical Turk is an online marketplace for work that allows you to pay people small sums of money to do "Human Intelligence Tasks," or HITs. Tasks range from labeling images and listening to short audio clips to researching topics on the internet and scrubbing database records.
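As a concrete illustration (not part of the original call), the sketch below shows roughly how a requester might create such a HIT programmatically. It assumes Python with the boto3 library and AWS credentials configured in the environment; the task text, reward, and parameters are all hypothetical.

    import boto3

    # Minimal sketch of creating a HIT, assuming the boto3 MTurk client.
    # Targets the requester sandbox so no real money is spent while testing.
    mturk = boto3.client(
        "mturk",
        region_name="us-east-1",
        endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
    )

    # A one-question form in MTurk's QuestionForm XML schema.
    question_xml = """<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
      <Question>
        <QuestionIdentifier>sentiment</QuestionIdentifier>
        <QuestionContent><Text>Is this sentence positive or negative? "The movie was a delight."</Text></QuestionContent>
        <AnswerSpecification><SelectionAnswer><Selections>
          <Selection><SelectionIdentifier>pos</SelectionIdentifier><Text>Positive</Text></Selection>
          <Selection><SelectionIdentifier>neg</SelectionIdentifier><Text>Negative</Text></Selection>
        </Selections></SelectionAnswer></AnswerSpecification>
      </Question>
    </QuestionForm>"""

    response = mturk.create_hit(
        Title="Label the sentiment of a sentence",
        Description="Read one sentence and choose positive or negative.",
        Keywords="labeling, sentiment, language",
        Reward="0.05",                    # USD per assignment (hypothetical)
        MaxAssignments=3,                 # redundant labels per item
        LifetimeInSeconds=3 * 24 * 3600,  # HIT stays available for 3 days
        AssignmentDurationInSeconds=300,  # 5 minutes per assignment
        Question=question_xml,
    )
    print("Created HIT:", response["HIT"]["HITId"])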

A number of recent papers have evaluated the effectiveness of using Mechanical Turk to create annotated data for natural language processing applications. Mechanical Turk's low-cost, scalable workforce opens new possibilities for annotating speech and text, and has the potential to dramatically change how we create data for human language technologies. Open questions include:

    • How can we ensure high-quality annotations? (One common approach, redundant labeling with a majority vote, is sketched after this list.)

    • What tools are available for obtaining complex annotations?

    • What types of annotations and evaluations are possible when the cost is dramatically reduced?
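On the first question, one widely used strategy is to collect several redundant labels per item and keep the majority answer. The sketch below illustrates this with hypothetical data; real labels would come from the downloaded assignment results.

    from collections import Counter

    # Hypothetical labels: three redundant judgments per item, as might be
    # collected by setting MaxAssignments=3 on a HIT.
    labels_by_item = {
        "sent-001": ["pos", "pos", "neg"],
        "sent-002": ["neg", "neg", "neg"],
        "sent-003": ["pos", "neg", "pos"],
    }

    def majority_label(labels):
        """Return the most frequent label and the fraction of voters who agree."""
        label, count = Counter(labels).most_common(1)[0]
        return label, count / len(labels)

    for item, labels in labels_by_item.items():
        label, agreement = majority_label(labels)
        print(f"{item}: {label} (agreement {agreement:.0%})")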

The workshop will explore uses of Mechanical Turk in two ways:

    • Shared task: What can you do with $100 and Mechanical Turk? Participants will be given a budget to spend on Mechanical Turk and will submit papers describing their experience and results.

    • General papers: These papers will explore general issues with using Mechanical Turk for language processing research.

Discussion Group

We have set up a discussion group for workshop participants (and other interested researchers) to discuss challenges in using Mechanical Turk for NLP. You can sign up for the e-mail list or read previous discussions on the Google Groups page for the workshop.

Important Dates

Shared task proposal deadline: January 31, 2010

Shared Task: What can you do with $100 and Amazon Mechanical Turk?

The shared task is generously sponsored by Amazon Mechanical Turk, which is providing $100 credits to teams, and by CrowdFlower, which is offering free use of their software to workshop participants. Note that CrowdFlower allows participation from groups that are not based in the USA.

Our shared task will attempt to answer this question. Each shared task team will be given a $100 credit on Amazon Mechanical Turk to spend on an annotation task. Each team will submit a short paper describing their experience and will distribute the data that they created. The data will be made available on the workshop website. Participants may add their own funds as desired.
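To make the budget concrete, the arithmetic below sketches the kind of cost-per-label accounting a shared task paper might report. All of the numbers (the per-assignment reward, Amazon's commission, and the redundancy factor) are hypothetical assumptions, not figures from this call.

    # Hypothetical cost-per-label arithmetic for a $100 credit.
    BUDGET = 100.00        # USD credit from Amazon
    REWARD = 0.05          # assumed payment per assignment
    COMMISSION = 0.10      # assumed MTurk fee as a fraction of the reward
    REDUNDANCY = 3         # assumed labels collected per item

    cost_per_label = REWARD * (1 + COMMISSION)
    total_labels = int(BUDGET / cost_per_label)
    distinct_items = total_labels // REDUNDANCY

    print(f"cost per label:  ${cost_per_label:.4f}")   # $0.0550
    print(f"total labels:    {total_labels}")          # 1818
    print(f"distinct items:  {distinct_items}")        # 606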

Shared task papers can address the following:

    • Were non-expert annotators as good as experts?

    • What is the cost per label?

    • How did you convey the task in simple enough terms for non-experts?

    • What did you do to ensure quality?

    • How quickly did the data get annotated?

To receive $100 towards participation in the shared task, a team must submit a proposal (1 page max) describing their intended experiments and expected outcomes by January 31, 2010. The organizers will award funding based on the proposals' merit and on funding availability. Note that PROPOSALS to the shared task are NOT blind, while ALL PAPERS submitted to the workshop are blind and must be anonymized.

For the shared task, some participants may choose to use data with restricted distribution licenses. We will allow proposals to use such data provided that it comes from a commonly available corpus, such as those distributed by the Linguistic Data Consortium (LDC), TREC, etc. The annotations must be distributed in such a way that participants who obtain the original corpus can merge in the annotations correctly. Please indicate in your proposal what data you will annotate and any associated licensing issues.
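One common way to satisfy this requirement is standoff annotation: distribute only the labels, keyed by the corpus's own document identifiers and character offsets, so that anyone who licenses the original corpus can merge them back in. The format below is a hypothetical illustration, not a requirement of the workshop.

    import csv

    # Hypothetical standoff annotations: no licensed text is redistributed;
    # each row points into the original corpus by document id and offsets.
    rows = [
        # (doc_id, start_char, end_char, label)
        ("LDC-doc-042", 117, 133, "PERSON"),
        ("LDC-doc-042", 210, 224, "LOCATION"),
    ]

    with open("annotations.tsv", "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(["doc_id", "start", "end", "label"])
        writer.writerows(rows)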

Teams that do not accept workshop funding for the shared task are also welcome to submit papers on this topic. These papers should be submitted to the research track.

Deadline for Shared Task Proposals: January 31, 2010

General Papers

We solicit general papers on the following topics:

    • Position papers about the use of Mechanical Turk in language research based on previous experiences.

    • Toolkits and mechanisms for obtaining language annotations from Mechanical Turk.

    • Creative uses of Mechanical Turk in research.

    • Guides for using Mechanical Turk in academic and industrial settings.

    • Tutorials on using Mechanical Turk.

Submission Instructions

Authors are invited to submit papers on original, unpublished work. Papers will be evaluated using BLIND REVIEW by at least two members of the program committee. All papers should be anonymized for review. Accepted papers will be published in the workshop proceedings.

Submissions should be formatted using the NAACL 2010 style available here: http://naaclhlt2010.isi.edu/authors.html.

Paper length: Both general papers and shared task papers may be either 4 or 8 pages.

PDF files should be submitted electronically through the NAACL submission system: https://www.softconf.com/naaclhlt2010/mechanicalturk/

Organizing Committee

Chris Callison-Burch (Johns Hopkins University)

Mark Dredze (Johns Hopkins University)

Program Committee

Breck Baldwin (Alias-i)

Jordan Boyd-Graber (UMD)

Michael Bloodgood (HLTCOE)

Bob Carpenter (Alias-i)

David Chen (UT Austin)

Maxine Eskenazi (CMU)

Nikesh Garera (Kosmix)

Jim Glass (MIT)

Alex Gruenstein (Google)

Janna Hamaker (Amazon)

Jon Hamaker (Microsoft)

Samer Hassan (University of North Texas)

Benjamin Lambert (CMU)

Ben Leong (University of North Texas)

Alexandre Klementiev (JHU)

Ian McGraw (MIT)

Scott Novotney (JHU)

Brendan O'Connor (CMU)

Gabriel Parent (CMU)

Massimo Poesio (University of Essex)

Joe Polifroni (Nokia Research Labs)

Joseph Reisinger (UT Austin)

Ted Sandler (Amazon)

Stephanie Seneff (MIT)

Kevin Small (Tufts)

Rion Snow (Stanford / Twitter)

We thank Amazon Mechanical Turk and CrowdFlower for supporting the shared task.