INVITED SPEAKERS
Title: "Debating AI"
Debates have recently emerged as a powerful mechanism for addressing various challenges in contemporary AI, such as ensuring AI safety, improving LLM performance, and explaining classifiers (post-hoc or natively). In much of this literature, debates are unstructured and inform other entities that decide the debates’ outcome. Thus, while these debates provide a justification for the outcomes, the outcomes are not faithfully explained by the debates. In this talk, I will explore how computational argumentation can support conducting and evaluating debates so as to obtain faithful explanations and contestability of outcomes, across a number of settings such as claim verification, classification, and bias detection.
Title: "Privacy vs. Explainable AI: Must We Choose?"
We are witnessing the widespread adoption of AI systems powered by advanced machine learning models, increasingly applied in critical domains such as healthcare, finance, and credit scoring. In these high-stakes contexts, it is essential to design Trustworthy AI systems that ensure both the interpretability of decision-making processes and the protection of individual privacy. In this talk, we will explore the complex relationship between explainability and privacy, two fundamental ethical pillars of responsible AI. We will address key research questions, including: Can explanations inadvertently compromise individual privacy? How can we guarantee privacy protection for explainable AI models? Through this discussion, we aim to shed light on the opportunities, trade-offs, and design principles that can help align these goals in the development of transparent, privacy-preserving AI systems.
@ECAI 2025 - Workshop "Multimodal, Affective and Interactive eXplainable Artificial Intelligence" (MAI-XAI 25)