QUARE @ SIGIR 2022
1st Workshop on Measuring the Quality of Explanations
in Recommender Systems
quā-rē, adv. [quae-res] I. Interrog., by what means? how? II. From what cause, on what account, wherefore, why.
What is QUARE?
QUARE—Measuring Quality of Explanations in Recommender Systems—is the first workshop dedicated to future research and practice directions around evaluation methodologies for explanations in recommender systems. This half-day workshop brings together researchers and practitioners from academia and industry to discuss and outline previous, current, and future research directions in the field of explanations in recommender systems. In particular, we want to stimulate reflection on methods to systematically and holistically assess explanation approaches and goals, at the interplay between organisational and human values.
We welcome submissions of previously published and ongoing research, position papers, application-oriented papers, and theoretical and empirical studies addressing opportunities, challenges, and solutions related to one or more of the workshop topics.
Motivation
Recommendations are ubiquitous in many contexts and domains due to the continuously growing adoption of decision-support systems. Recommendations are often provided to help us decide which items to buy, which news items to read or watch, or even which educational institutions or job positions to apply to. Explanations may accompany recommendations to convey the reasoning behind suggesting a particular item [4]. However, explanations may also significantly affect a user’s decision-making process by serving a number of different goals [8], such as transparency, persuasiveness, and effectiveness, among others.
In the last few years, a growing number of papers have been published on the explainability of recommender systems, at venues such as but not limited to SIGIR [1–4, 9] and RecSys [10]. Despite a vast amount of research [11], the evaluation of recommendation explanations is still an area where significant gaps remain. For example, as of yet, there is no consensus on whether a one-size-fits-all good explanation exists, or on how to measure explanation quality [7]. Furthermore, the relationship between the quality and effects of explanations has not yet been investigated in depth [1, 5].
The lack of established, actionable methodologies for evaluating explanations for recommendations, as well as of evaluation datasets, hinders cross-comparison between different explainable recommendation approaches, and is one of the issues hampering the widespread adoption of explanations in industry settings.
A public service broadcaster may want to support its audience-facing recommender systems with explanations whose main intents are to explain how the system works and to ensure users have confidence in it. The same broadcaster may want to build an internal tool that allows scrutiny from its editorial team. On the other hand, a commercial media platform aiming to maximise engagement through diverse content recommendations may focus more on persuasive and efficient explanations. Conversely, end-users of a recommender system may hold different values, and explanations can affect them differently [6]. For instance, a user who values transparency and trust, and expects these from an organisation, may be put off by explanations that primarily aim to persuade them to consume more content. Different organisational values may require a different combination of explanation goals; likewise, within the same organisation, some combinations of goals may be more appropriate for some use cases and less so for others. Therefore, understanding whether explanations are fit for their intended goals is key to subsequently implementing them in production.
This workshop aims to extend existing work in the field by bringing together and facilitating the exchange of perspectives and solutions from industry and academia, bridging the gap between academic design guidelines and industry best practices for implementing and evaluating explanations in recommender systems, with respect to their goals, impact, potential biases, and informativeness. With this workshop, we provide a platform for discussion among scholars, practitioners, and other interested parties.
References
[1] Krisztian Balog and Filip Radlinski. 2020. Measuring Recommendation Explanation Quality: The Conflicting Goals of Explanations. In SIGIR. ACM, 329–338.
[2] Krisztian Balog, Filip Radlinski, and Shushan Arakelyan. 2019. Transparent, Scrutable and Explainable User Models for Personalized Recommendation. In SIGIR. ACM, 265–274.
[3] Zuohui Fu, Yikun Xian, Ruoyuan Gao, Jieyu Zhao, Qiaoying Huang, Yingqiang Ge, Shuyuan Xu, Shijie Geng, Chirag Shah, Yongfeng Zhang, and Gerard de Melo. 2020. Fairness-Aware Explainable Recommendation over Knowledge Graphs. In SIGIR. ACM, 69–78.
[4] Deepesh V. Hada, Vijaikumar M, and Shirish K. Shevade. 2021. ReXPlug: Explainable Recommendation using Plug-and-Play Language Model. In SIGIR. ACM, 81–91.
[5] Chen He, Denis Parra, and Katrien Verbert. 2016. Interactive recommender systems: A survey of the state of the art and future research challenges and opportunities. Expert Systems with Applications 56 (2016), 9–27.
[6] Martijn Millecamp, Cristina Conati, and Katrien Verbert. 2022. “Knowing me, knowing you”: personalized explanations for a music recommender system. User Modeling and User-Adapted Interaction (2022), 1–38.
[7] Ingrid Nunes and Dietmar Jannach. 2017. A systematic review and taxonomy of explanations in decision support and recommender systems. User Model. User Adapt. Interact. 27, 3-5 (2017), 393–444.
[8] Nava Tintarev and Judith Masthoff. 2015. Explaining Recommendations: Design and Evaluation. In Recommender Systems Handbook. Springer, 353–382.
[9] Khanh Hiep Tran, Azin Ghazimatin, and Rishiraj Saha Roy. 2021. Counterfactual Explanations for Neural Recommenders. In SIGIR. ACM, 1627–1631.
[10] Yikun Xian, Tong Zhao, Jin Li, Jim Chan, Andrey Kan, Jun Ma, Xin Luna Dong, Christos Faloutsos, George Karypis, S. Muthukrishnan, and Yongfeng Zhang. 2021. EX3: Explainable Attribute-aware Item-set Recommendations. In RecSys. ACM, 484–494.
[11] Yongfeng Zhang and Xu Chen. 2020. Explainable Recommendation: A Survey and New Perspectives. Found. Trends Inf. Retr. 14, 1 (2020), 1–101.