
Quality Assurance Mechanisms

In contrast to machine clouds, the quality of task results obtained from human clouds can vary significantly. In particular, cheating workers, unclear instructions, or a lack of qualification can lead to low quality results. Crowdsourcing tasks are completed remotely by anonymous workers without any supervision. This anonymity may encourage workers to cheat, i.e., to try to increase their income by intentionally using malicious techniques, even if the expected gains are rather small. Besides intentional cheating, issues caused by the task design can also result in low quality results. However, it is difficult to identify such problems, e.g., misleading instructions, as direct interaction between workers and employers is usually not possible.

Numerous efforts have already been made to improve the quality of the task results submitted by workers. Most approaches try to assess the quality of an individual worker, use group- or workflow-based mechanisms to level out individual erroneous results, or optimize the task design. Existing guidelines for an optimal task design suggest that cheating should take longer than completing the task properly. Moreover, it is suggested that discouraging cheaters through an appropriate task design is more efficient than detecting them, and it was previously observed that cheaters are encountered more or less frequently depending on the type of task. However, not only the task type but also design parameters such as task length, monetary reward, and the time required for task completion influence the number of cheaters attracted by a task.

Within this project we support the efforts to optimize the quality of crowdsourcing task results by extending existing work in two directions. First, we demonstrate an approach for assessing the quality of an individual worker; second, we provide a numerical model for evaluating the costs and accuracy of two widespread quality assurance workflows. To this end, we showed in [1] that an analysis of the worker’s interactions with the task interface can be used to estimate the quality of the task results. We use an exemplary language skill assessment task and a web-based interaction monitoring toolset to evaluate the feasibility of this approach. [2] presents an analytic model for two group-based quality assurance mechanisms. Using this model, we evaluated the accuracy and the costs of both approaches for different types of crowdsourcing tasks.
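To illustrate the kind of cost–accuracy trade-off that group-based quality assurance involves, the following Python sketch computes the accuracy and cost of a simple majority-vote group for independent workers. This is only a textbook illustration under strong independence assumptions, not the analytic model from [2]; the worker reliability `p`, group sizes, and reward used below are made-up example values.

```python
from math import comb

def majority_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent workers, each
    correct with probability p, yields the correct result.
    Assumes an odd group size n so that ties cannot occur."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def group_cost(reward: float, n: int) -> float:
    """Cost of one task instance when every worker in the group is paid."""
    return reward * n

# Hypothetical example: workers answer correctly 80% of the time,
# each task completion is rewarded with $0.05.
for n in (1, 3, 5, 7):
    acc = majority_accuracy(0.8, n)
    cost = group_cost(0.05, n)
    print(f"group size {n}: accuracy {acc:.3f}, cost ${cost:.2f}")
```

The sketch shows the basic tension: accuracy grows with the group size (e.g., from 0.8 for a single worker to about 0.94 for a group of five), while the cost grows linearly, so the employer has to pick a group size that balances both.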


[1] Matthias Hirth, Sven Scheuring, Tobias Hoßfeld, Christopher Schwartz and Phuoc Tran-Gia. “Predicting Result Quality in Crowdsourcing Using Application Layer Monitoring”. In: Proceedings of the Conference on Communications and Electronics. Danang, Vietnam, July 2014.
[2] Matthias Hirth, Tobias Hoßfeld and Phuoc Tran-Gia. “Analyzing Costs and Accuracy of Validation Mechanisms for Crowdsourcing Platforms”. In: Mathematical and Computer Modelling 57.11-12 (Dec. 2013).