Panelists

Explanation Evaluation

How can the quality of explanations be evaluated through both subjective and objective measures?

Olfa Nasraoui

Olfa Nasraoui is a full Professor of Computer Science & Engineering, holder of the endowed Chair of e-commerce, and the founding director of the Knowledge Discovery & Web Mining Lab in the Speed School of Engineering at the University of Louisville. She conducts research in machine learning, AI, and data science, in particular web mining, information retrieval, and recommender systems; fairness and explainability in AI; and mining high-dimensional, heterogeneous, and evolving data streams. She is a National Science Foundation CAREER award winner and a two-time winner of Best Paper Awards in machine learning and fair AI. The first Best Paper Award, in Theoretical Developments in Computational Intelligence at Artificial Neural Networks in Engineering (ANNIE 2001), was for robust clustering algorithms for web usage mining. The second, at KDIR 2018, was for research on modeling and studying the impact of algorithmic bias in machine learning and recommender systems. She has served as Principal Investigator or Co-Investigator on more than 20 research grant projects funded by diverse sources, including the National Science Foundation and NASA. She leads, as PI, the NSF-funded ATHENA ADVANCE initiative for faculty equity at the University of Louisville. She serves as Associate Editor for the Recommender Systems section of Frontiers in Big Data and on the editorial boards of the International Journal of Machine Learning and Computing and Applied Intelligence. She has also served as Associate Editor for IEEE Access and guest editor for Data Mining and Knowledge Discovery. She has served on the organizing and program committees of several conferences and workshops, including co-organizing the premier series of workshops on Web Mining, WebKDD 2004-2008, as part of ACM KDD. She has also served as Program Committee Vice-Chair, Track Chair, or Senior Program Committee member for several data mining conferences, including ACM RecSys, KDD, AAAI, IJCAI, ICDM, SDM, and CIKM. She is a member of ACM, ACM SIGKDD, and AAAS, and a senior member of IEEE.

Nava Tintarev

Nava Tintarev is a Full Professor of Explainable Artificial Intelligence at Maastricht University. She leads or contributes to several projects in the field of human-computer interaction in artificial advice-giving systems, such as recommender systems, specifically developing the state of the art for automatically generated explanations (transparency) and explanation interfaces (recourse and control). She currently participates in a Marie Curie Training Network on Natural Language for Explainable AI. She also represents Maastricht University as a Co-Investigator in the ROBUST consortium, selected for a national (NWO) grant with a total budget of 95M (25M from NWO) to carry out long-term (10-year) research into trustworthy artificial intelligence, and is co-director of the TAIM lab on trustworthy media. She regularly shapes international scientific research programs (e.g., on steering committees of journals or as program chair of conferences) and actively organizes and contributes to high-level strategic workshops on responsible data science, both in the Netherlands and internationally. She has published around 100 peer-reviewed papers in top human-computer interaction and artificial intelligence journals and conferences, such as UMUAI, TiiS, ECAI, ECIR, IUI, RecSys, and UMAP, including best paper awards at Hypertext, CHI, HCOMP, and CHIIR.

Chun-Hua Tsai

Chun-Hua Tsai is an Assistant Professor in the Department of Information Systems and Quantitative Analysis (ISQA) at the College of Information Science & Technology, University of Nebraska at Omaha (UNO). Before joining UNO, he served as an assistant research professor at the College of Information Sciences and Technology at Penn State University. He earned his Ph.D. from the University of Pittsburgh in 2019. His primary research focus lies in enhancing user comprehension of the principles underlying data and computing methodologies. His research team is dedicated to ensuring that the logic and data behind recommendations are accessible and intelligible to individuals without specialized AI training. This research entails developing everyday explanations that can benefit even non-expert users or those with limited AI literacy, drawing from diverse fields such as HCI, social science, cognitive science, and psychology. His research goal is to consider the user's mental model alongside domain experts' scientific intuition, fostering the design of explainable AI that empowers human-AI interaction within complex sociotechnical systems. He has been awarded federal grants from the NSF CRII and DASS programs, as well as multiple state- and university-level grants, to support his research.

Explanation generation

How can user needs and organisational objectives be balanced when explaining recommendations?

Robin Burke

Robin Burke is a Professor in the Department of Information Science at the University of Colorado, Boulder, and director of That Recommender Systems Lab, where he conducts research in recommender systems, especially fairness, accountability and transparency in recommendation through the integration of objectives from diverse stakeholders. Dr Burke obtained his PhD in Computer Science from Northwestern University in 1993 and has held positions at the University of Chicago, the University of California – Irvine, California State University – Fullerton, DePaul University, and University College Dublin. His work has received support from the National Science Foundation, the National Endowment for the Humanities, the Fulbright Commission and the MacArthur Foundation, among others.

Upol Ehsan

Upol Ehsan cares about people first, technology second. He is a Researcher and a Doctoral Candidate in the School of Interactive Computing at Georgia Tech and an affiliate at the Data & Society Research Institute. Combining his expertise in AI and background in Philosophy, his work in Explainable AI (XAI) aims to foster a future where anyone, regardless of their background, can use AI-powered technology with dignity. His research has a personal origin story: he was wrongfully detained at an airport due to an automated system's error, which no one could explain and for which no one could be held accountable. Focusing on how our values shape the use and abuse of technology, his work has pioneered the area of Human-centered Explainable AI (a sub-field of XAI). Actively publishing in top-tier research venues like ACM CHI, CSCW, FAccT, IUI, and AIES, his work has received multiple awards and has been covered in major media outlets (e.g., MIT Technology Review, Vice, VentureBeat). By promoting equity and ethics in AI, he wants to ensure stakeholders who aren't at the table do not end up on the menu. Outside research, he is an advisor for Aalor Asha, an educational institute he started for underprivileged children subjected to child labor. He is also a social entrepreneur and has co-founded DeshLabs, a social innovation lab focused on fostering grassroots innovations in emerging markets.

Francesco Sovrano

Francesco is a senior researcher at the University of Zürich, Institute for Informatics (https://www.ifi.uzh.ch/en/zest/team/francesco_sovrano.html). He received a PhD in Data Science and Computation with top marks from the University of Bologna, in association with the Polytechnic of Milan, in June 2023. In 2022, he was a visiting researcher at the Prorok Lab of the University of Cambridge. In 2021, he won an MSCA RISE grant for a project on “Explanatory AI in Education”, carried out in collaboration with the Law School of the University of Pittsburgh. His research interests lie at the intersection of Data Science, Explainable Artificial Intelligence (XAI), Law, and Software Engineering. Specifically, he has specialised in the theory of explanations and its applications in artificial intelligence for improving learning and the acquisition of information through explanations.

Explanation operationalisation

What could a typical MVP (Minimum Viable Product) for explanations of recommendations look like? What are the most basic features we might need?

Mohammad Naiseh

Mohammad (Mo) Naiseh holds the position of Lecturer in AI and Data Science at Bournemouth University, UK. His ultimate research objective is to develop AI systems that align with human values and ethics, ensuring their positive impact on society and the economy. To achieve this, he explores the integration of emerging XAI techniques with human-centred design practices.

Cecilia Panigutti

Cecilia Panigutti is a researcher at the European Centre for Algorithmic Transparency (ECAT), which is part of the European Commission and is hosted by the Joint Research Centre (JRC). Her work centres on algorithmic transparency, auditing, and AI standardization, with a particular focus on bridging the gap between scientific research and policy-making. At ECAT, Cecilia works on AI auditing in the context of the Digital Services Act (DSA) and also conducts scientific research to offer insights into emerging technologies. She earned her Ph.D. from Scuola Normale Superiore with a thesis on eXplainable AI and its applications in healthcare. Cecilia also holds a master's degree in Physics of Complex Systems.

Valeria Resendez

Valeria is a PhD candidate in Political Communication at the Department of Communication Science at the University of Amsterdam. She is affiliated with ASCoR and the Program Group Political Communication, and is also a research member of the AI, Media, and Democracy Lab. Valeria completed her master's degree (MSc in Communication, specialization in Technology, cum laude) at the University of Twente. Her research interests centre on conversational agents as a specific type of news recommender, and she also studies transparency and explainability mechanisms in recommender systems. Before pursuing her Ph.D., Valeria worked in the field of artificial intelligence and automation to enhance customer experience in Mexico. She also collaborated with Mexican institutions, such as CMinds and the IA2030MX coalition, to develop responsible AI frameworks in the country.