Keynotes

From Transparency and Users' Control in Recommender System Interfaces to Guidelines for Visual XAI


Denis Parra

Pontificia Universidad Católica de Chile

In this talk, Prof. Parra will review research from the last 20 years on visual interfaces for recommender systems, considering aspects such as explainability, transparency, and users' control; how these relate to the user experience with recommender systems; and what lessons we can synthesize from that work. He will then connect these results with the latest research on visual interfaces for explainable AI (XAI) systems, and will present new ideas and questions for current and future work.


About the speaker

Denis Parra is Associate Professor at the Department of Computer Science, in the School of Engineering at Pontificia Universidad Católica de Chile. He is also principal researcher at the excellence research centers CENIA (National Center for Research in Artificial Intelligence in Chile, 2021-2031) and iHealth (Millennium Institute for Intelligent Healthcare Engineering, 2021-2031), and adjunct researcher at the IMFD (Millennium Institute for Research on Fundamentals of Data). He earned a Fulbright scholarship to pursue his PhD studies between 2008 and 2013 at the University of Pittsburgh, USA, advised by Professor Peter Brusilovsky, with a doctoral thesis on user controllability in recommendation systems. Prof. Parra has published numerous articles in prestigious journals such as ACM TiiS, ACM CSUR, IJHCS, ESWA, and PLoS ONE, as well as in conferences such as ACM IUI, ACM RecSys, UMAP, ECIR, Hypertext, and EuroVis, among others. Prof. Parra received the best student paper award at UMAP 2011, as well as two best paper award nominations at ACM IUI, in 2018 and 2019, for his research on intelligent user interfaces for recommender systems and on AI medical applications. He was also selected as a best reviewer at The Web Conference 2017 and nominated for best reviewer at ACM RecSys 2018 and 2019. Prof. Parra has served as a senior PC member for conferences such as IUI, RecSys, UMAP, SIGIR, The Web Conference, and WSDM.

Prof. Parra's research interests are recommender systems, intelligent user interfaces, applications of machine learning (healthcare, creative AI), and information visualization. He currently leads the Human-centered AI and Visualization (HAIVis) research group and co-leads the CreativAI Lab with Professor Rodrigo Cádiz. He is also a faculty member of the PUC IA Lab.

Explainable AI for Non-Expert Users: Towards the Next Generation of Interactive and Adaptive Explanation Methods


Katrien Verbert

KU Leuven, Belgium

Despite the long history of work on explanations in the machine learning, AI, and recommender systems literature, current efforts face unprecedented difficulties: contemporary models are more complex and less interpretable than ever. As such models are used in many day-to-day applications, justifying their decisions to non-expert users with little or no technical knowledge will only become more crucial. Although several explanation methods have been proposed, little work has been done to evaluate whether they actually enhance human interpretability. Many existing methods also require significant expertise and are static. Several researchers have voiced the need for interaction with explanations as a core requirement to support understanding. In this talk, I will present our work on explanation methods tailored to the needs of non-expert users. In addition, I will present the results of several user studies investigating how such explanations interact with personal characteristics such as expertise, need for cognition, and visual working memory.


About the speaker

Katrien Verbert is Associate Professor in the Augment research group of the Computer Science Department at KU Leuven. She obtained a doctoral degree in Computer Science in 2008 at KU Leuven, Belgium, and was a postdoctoral researcher of the Research Foundation – Flanders (FWO) at KU Leuven. She was an Assistant Professor at TU Eindhoven, the Netherlands (2013 – 2014) and at Vrije Universiteit Brussel, Belgium (2014 – 2015). Her research interests include visualisation techniques, recommender systems, explainable AI, and visual analytics. She has been involved in several European and Flemish projects on these topics, including the EU ROLE, STELLAR, STELA, ABLE, LALA, PERSFO, Smart Tags, and BigDataGrapes projects. She has also been involved in the organisation of several conferences and workshops (general co-chair IUI 2021, program chair LAK 2020, general chair EC-TEL 2017, program chair EC-TEL 2016, workshop chair EDM 2015, program chair LAK 2013, program co-chair of the EdRecSys, VISLA, and XLA workshop series, DC chair IUI 2017, and DC chair LAK 2019).