Call for Contributions

Topic and themes

The workshop aims to promote discussion of future research and practice directions for evaluating explainable recommendations by bringing together academic and industry researchers and practitioners in the area. We focus in particular on real-world use cases, diverse organisational values and purposes, and different target users. We encourage submissions that study different explanation goals, and combinations thereof, and how these fit various organisational values and use cases. Furthermore, we welcome submissions that propose high-quality datasets and benchmarks and make them available to the community.


Topics include, but are not limited to:

  • Evaluation

    • Relevance of explanation goals for different use cases;

    • Soliciting user feedback on explanations;

    • Implicit vs. explicit evaluation of explanations and goals;

    • Reproducible and replicable evaluation methodologies;

    • Online vs. offline evaluations.

  • Personalisation

    • User-modelling for explanation generation;

    • Evaluation approaches for personalised explanations (e.g., content, style);

    • Evaluation approaches for context-aware explanations (e.g., place, time, alone/group setting, exploratory/transaction mode).

  • Presentation

    • Evaluation of different explanation modalities (e.g., text, graphics, audio, hybrid);

    • Evaluation of interactive explanations.

  • Datasets

    • Generation of datasets for evaluation of explanations;

    • Evaluation benchmarks.

  • Values

    • Evaluation of explanations in relation to organisational values;

    • Evaluation of explanations in relation to personal values.



Submissions

We welcome three types of submissions:

  • position or perspective papers (up to 4 pages in length, plus unlimited pages for references): original ideas, perspectives, research vision, and open challenges in the area of evaluation approaches for explainable recommender systems;

  • featured papers (title and abstract of the paper, plus the original paper): already published papers, or papers summarising existing publications in leading conferences and high-impact journals, that are relevant to the topic of the workshop;

  • demonstration papers (up to 2 pages in length, plus unlimited pages for references): original or already published prototypes and operational evaluation approaches in the area of explainable recommender systems.

Page limits include diagrams and appendices. Submissions should be single-blind, written in English, and formatted according to the current ACM two-column conference format. Suitable LaTeX, Word, and Overleaf templates are available from the ACM website (use the “sigconf” proceedings template for LaTeX and the Interim Template for Word).


Submit papers electronically via EasyChair: https://easychair.org/my/conference?conf=quare22


Accepted papers will be published on this website. At least one author of each accepted paper is required to register for the workshop and present the work.