Disentangling the Relation Between Crowdsourcing and Bias Management

CrowdBias Workshop, co-located with HCOMP 2018

Crowdsourcing has become a successful method to obtain the human computation needed to augment algorithms (Demartini et al., 2017) and to perform high-quality data management (Marcus and Parameswaran, 2015). Humans, though, have various cognitive biases that influence how they interpret statements, make decisions, and remember information. If we use crowdsourcing to generate ground truth, it is important to identify the biases present among crowd contributors and to analyze the effects those biases may produce. At the same time, access to a potentially large number of people gives us the opportunity to handle biases in existing data and systems.

The goal of this workshop is to analyze both existing biases in crowdsourcing and methods to manage bias via crowdsourcing. We will discuss different types of bias, measures and methods to track bias, and methodologies to prevent and mitigate it.

We will provide a framework for discussion among scholars, practitioners and other interested parties, including crowd workers, requesters and crowdsourcing platform managers.

We expect contributions combining ideas from different disciplines, including computer science, psychology, and economics.


Gianluca Demartini, Djellel Eddine Difallah, Ujwal Gadiraju, and Michele Catasta. 2017. An Introduction to Hybrid Human-Machine Information Systems. Foundations and Trends in Web Science 7, 1 (December 2017), 1-87.

Adam Marcus and Aditya Parameswaran. 2015. Crowdsourced Data Management: Industry and Academic Perspectives. Foundations and Trends in Databases 6, 1-2, 1-161.