July 28, 2011 in Beijing, China

Congratulations to Jun Wang and Bei Yu, winners of the workshop's Best Paper Award (Sponsored by Microsoft Bing)!

9:00-9:15: Welcome
  • Workshop Overview
  • Spotlight on the Crowdsourcing Challenge Winner: Mark Smucker
9:15-10:00: Invited Talk: Gabriella Kazai, Microsoft Research: Effects of Defensive HIT Design on Crowd Diversity

10:00-10:30: Coffee

10:30-11:25: Poster Session I

The Crowd vs. the Lab: A Comparison of Crowd-Sourced and University Laboratory Participant Behavior
Mark Smucker and Chandra Prakash Jethani
Winner: Crowdsourcing Challenge

Quality Control of Crowdsourcing through Workers Experience
Li Tai, Zhang Chuang, Xia Tao, Wu Ming and Xie Jingjing

Semi-Supervised Consensus Labeling for Crowdsourcing
Wei Tang and Matthew Lease

How Much Spam Can You Take? An Analysis of Crowdsourcing Results to Increase Accuracy
Jeroen Vuurens, Arjen P. de Vries and Carsten Eickhoff

Labeling Images with a Recall-based Image Retrieval Game
Jun Wang and Bei Yu

11:25-12:10: Invited Talk: Ian Soboroff, NIST: Experiences and Lessons from Collecting Relevance Judgments

12:10-1:40: Lunch

1:40-2:25: Invited Talk: Praveen Paritosh, Google: Issues of Quality in Human Computation

2:25-3:15: Poster Session II

Genealogical Search Analysis Using Crowd Sourcing
Patrick Schone and Michael Jones

A Comparison of On-Demand Workforce with Trained Judges for Web Search Relevance Evaluation
Maria Stone, Kylee Kim, Suvda Myagmar and Omar Alonso

An Ensemble Framework for Predicting Best Community Answers
Qi Su

Crowdsourced Evaluation of Personalization and Diversification Techniques in Web Search
David Vallet

DEMO: GEAnn - Games for Engaging Annotations
Carsten Eickhoff, Christopher G. Harris, Padmini Srinivasan and Arjen P. de Vries

3:15-3:45: Coffee

3:45-4:30: Panel: Beyond the Lab: State-of-the-Art and Open Challenges in Practical Crowdsourcing
4:30-5:30: Discussion and Break-outs

5:30-?: Conversation continues at a nearby bar...