Evaluation is a fundamental part of Information Retrieval, and in the conventional Cranfield evaluation paradigm, sets of relevance assessments are a fundamental component of test collections. In this workshop, we wish to revisit how relevance assessments can be created efficiently. Discussion and exploration of this issue will be facilitated through the presentation of results-based and position papers on the topic. Participants will also be invited to take part in a design task focused on developing a benchmarking exercise. The workshop will conclude with an open discussion session, part of which will focus on future directions. As well as providing a forum for discussion and exploration, it is hoped that this workshop will lead to future activities and collaborations in this area.
The workshop organisers gratefully acknowledge the support of the European Science Foundation (ESF) research networking programme "Evaluating Information Access Systems" (ELIAS).