Welcome to the ACM SIGIR GEAR workshop homepage!

Friday, July 11th, 2014
Gold Coast Convention Centre, Australia

Evaluation is a fundamental part of Information Retrieval, and in the conventional Cranfield evaluation paradigm, sets of relevance assessments are a fundamental part of test collections. In this workshop, we wish to revisit how relevance assessments can be created efficiently. Discussion and exploration of this issue will be facilitated through the presentation of results-based and position papers on the topic. Participants will also be invited to take part in a design task focused on developing a benchmarking exercise. The workshop will conclude with an open discussion session, part of which will focus on future directions. As well as providing a forum for discussion and exploration, it is hoped that this workshop will lead to future activities and collaborations in this area.

Workshop areas of interest:

  • How the method of generating assessments, via conventional means or crowdsourcing, affects the judgments gathered, including issues of assessor expertise, payment, etc.
  • The process by which individuals, or groups of individuals, assess documents (text, image, video, etc.) for relevance
  • Issues relating to the effort required to generate relevance assessments for different types of topic and different types of material (text, web, image, video, etc., and multiple languages)
  • Revisiting the concept of “relevance”, from a practical, operational standpoint, for the purposes of IR evaluation
Acknowledgements:

The workshop organisers gratefully acknowledge the support of the European Science Foundation (ESF) research networking programme "Evaluating Information Access Systems" (ELIAS).