Thanks for doing our HITs! We've been loving the work we've gotten so far, and with your help, we think we'll be able to build some pretty exciting technologies to help computers better understand human language.
Can I have some more examples?
Sure, here are some sentences from our pilot study that we liked.
Is there a right answer?
The task is pretty subjective, and it depends on what kinds of background assumptions seem reasonable to you. For some of these pairs there is no single right answer, but we want you to follow the given instructions and use your own common sense to label them.
Will you reject all of my work?
No. Unless it's clear to us that you are assigning labels across many HITs without even considering the sentences, we won't reject any of your work.
Where do these sentences come from?
These sentences were taken from Wikipedia articles.
When do you approve HITs?
We'll try to approve as early as we can, but we can't promise to look at the data directly any more often than once a day.
(For HITs with bonus) When do you send bonuses?
Since there is another validation phase, we can't send bonuses right away, but we promise to send them as soon as possible. We expect to send bonuses no more than one week after all the HITs are completed.
When should I choose the 'I don't understand' option?
You should choose this if you can't complete the HIT, and not otherwise. This applies either if the sentence is total nonsense (you can't even guess what it's describing), or if the HIT interface is partially broken (no sentence shown, for example). If there is a typo in a sentence but you think you know what it means anyway, please don't choose this option.
Who are you?
We are the Bowman Group, a subgroup of the ML2 group at the New York University Center for Data Science. We are also affiliated with the NYU Departments of Computer Science and Linguistics.
I have more questions!
Email us through MTurk!