Keynotes

DIETMAR JANNACH

University of Klagenfurt

XITING WANG

Microsoft Research Asia

Explaining Recommendations to End Users: Simple or Complex?

Questions of how to explain recommendations to end users have been studied in academia for more than twenty years. In this talk, we will first review a selection of earlier studies that aimed to understand the effects of different forms of explanations on the beliefs and behavior of end users. These studies have in common that they (i) are based on the implicit understanding that explanation is a problem of human-computer interaction, and (ii) evaluate the effects of explanations with respect to their intended purpose. In recent years, with the growing importance of Explainable AI, an increased use of computational evaluation approaches can be observed. We will discuss the limitations and potential risks that come with such evaluations and emphasize the importance of approaching the problem of designing and evaluating explanations in a purpose-oriented way, based on insights from the social sciences.

Evaluating Explainability in Recommendations: Current Practices and Opportunities

Explainability has been shown to be essential for recommender systems. As a bridge between users and recommender systems, explanations are effective in increasing user trust, satisfaction, and the probability that a user adopts the recommendation. While explainable recommendation is known to be important and has attracted increasing attention, research on explainability is largely hindered by concerns about evaluation, e.g., how can we measure users' perception of explanations in an offline setting? In this talk, we will present the major categories of evaluation methods and how to choose among them, by summarizing the existing literature on explainable recommendation. Moreover, we will discuss the limitations of current evaluation methods and the major challenges they pose, and point out potential opportunities for overcoming these limitations.

Bio:

Dietmar Jannach is a professor of computer science at the University of Klagenfurt, Austria. His main research theme is the application of intelligent system technology to practical problems and the development of methods for building knowledge-intensive software applications. In recent years, he has worked on various topics in the area of recommender systems, where he also published the first international textbook on the topic.

Bio:

Xiting Wang is a senior researcher at Microsoft Research Asia. She received her Ph.D. degree in Computer Science from Tsinghua University in 2017, and a B.S. degree in Electronics Engineering from Tsinghua University. Her research interests include explainable AI and AI for social good. She has published around 10 papers on explainable recommendation in reputable international conferences and has delivered products on explainable recommendation at Microsoft. Her academic papers have been published in KDD, ICML, SIGIR, TKDE, TVCG, and VAST, and have received more than 1,500 citations. One of her first-author papers was chosen as the TVCG spotlight article for Dec. 2016. She received a Best SPC award at AAAI 2021 and serves as an SPC or program committee member of many top conferences.