Call for Abstracts

We welcome abstracts of roughly 250 words describing methodologies, studies, or systems relevant to the topics of the workshop. Submissions are not anonymous. Unpublished work, vision statements, and work in progress are welcome.

We are looking for contributions offering interesting insights that could lead to a productive discussion during the symposium. The Programme Committee's main evaluation criteria are scientific relevance, level of innovation, and research potential.

Abstracts should be submitted at

Topics of interest include, but are not limited to:

Biases in Human Computation and Crowdsourcing

  • Human sampling bias
  • Effect of cultural, gender and ethnic biases
  • Effect of human in the loop training and past experiences
  • Effect of human expertise vs interest
  • Bias in experts vs bias in crowdsourcing
  • Bias in outsourcing vs bias in crowdsourcing
  • Bias in task selection
  • Task assignment/recommendation for reducing bias
  • Effect of human engagement on bias
  • Responsibility and ethics in human computation and bias management
  • Preventing bias in crowdsourcing and human computation
  • Creating awareness of cognitive biases among human agents
  • Measuring and addressing ambiguities and biases in human annotation
  • Human factors in AI

Using Human Computation and Crowdsourcing for Bias Understanding and Management

  • Biases in Human-in-the-loop systems
  • Identifying new types of cognitive bias in data or content
  • Measuring bias in data or content
  • Removing bias in data or content
  • Dealing with algorithmic bias
  • Fake news detection
  • Diversification of sources
  • Provenance and traceability
  • Long-term crowd engagement
  • Generating benchmarks for bias management