AI Trustworthiness and Risk Assessment for Challenged Contexts (ATRACC) - Keynotes
AAAI 2025 Fall Symposium
Westin Arlington Gateway, Arlington, VA USA
November 6-8, 2025
Keynote Speakers
Paulo Shakarian
K.G. Tan Endowed Professor of Artificial Intelligence
Syracuse University
Topic
Artificial Metacognition: Theory and Practice
Bio
Paulo Shakarian holds the K.G. Tan Endowed Professorship at Syracuse University. His academic accomplishments include four best-paper awards, over 100 peer-reviewed articles, 12 issued patents, and 8 published books. Shakarian has secured over $7 million in grant funding from various government and industry sponsors. Previously, Shakarian held faculty positions at West Point and Arizona State University. He holds a Ph.D. and M.S. in computer science from the University of Maryland and a B.S. in computer science from West Point.
Abstract
Metacognitive AI is an emerging research trend concerned with artificial intelligence systems that can monitor and/or regulate their own cognitive processes and resources. The concept has its roots in cognitive psychology, where studies of human metacognition have shown how people monitor, control, and communicate their cognitive processes; a growing body of AI research now seeks to build systems with these same capabilities. This talk summarizes the key ideas about metacognition from cognitive psychology, describes recent attempts to instantiate these concepts in AI systems, and discusses metacognitive capabilities observed in humans that remain largely unexplored in AI research. We specifically examine applications of artificial metacognition to vision, language, time-series, and out-of-distribution problems.
David Sadek
VP Research, Technology & Innovation
Thales
Topic
Trustworthy AI for Critical Systems: New Challenges
Bio
David Sadek is VP Research, Technology & Innovation at Thales, notably in charge of Artificial Intelligence and Quantum Computing. A Doctor of Computer Science and an expert in Artificial Intelligence and Cognitive Science, he was Chairman of the Executive Committee of the French national industrial program on AI (Confiance.ai) and, previously, SVP Research at IMT (Institut Mines-Télécom) and VP R&D at Orange. For more than fifteen years, he created and ran R&D teams at Orange Labs working on intelligent agents and natural human-machine dialogue. His research led to the design and implementation of the first conversational agent technologies worldwide, as well as to the ACL inter-agent communication language standard. He has also directed several industrial transfer and innovative service deployment programmes.
Abstract
TBD
Biplav Srivastava
Professor, AI Institute
University of South Carolina
Topic
Extending Transparency Beyond Developers in AI-Driven Decision Making
Bio
Biplav Srivastava is a Professor of Computer Science at the AI Institute and the Department of Computer Science at the University of South Carolina, which he joined in 2020 after two decades in industrial research. He directs the 'AI for Society' group, which investigates how to enable people to make rational decisions, despite the real-world complexities of poor data, changing goals, and limited resources, by augmenting their cognitive limitations with technology. Dr. Srivastava's expertise is in Artificial Intelligence (reasoning, representation, learning, and human-AI interaction), Services (process automation, composition), and Sustainability (governance: elections, water, traffic, health, power). His contributions have led to many science firsts and high-impact commercial innovations valued at billions of dollars, 250+ papers, 75+ issued US patents, and awards for papers, demos, and hacks. More details about his group and him are available at https://ai4society.github.io/ and https://sites.google.com/site/biplavsrivastava/, respectively.
Abstract
Current eXplainable AI (XAI) methods largely serve developers, often focusing on justifying model outputs rather than supporting diverse stakeholder needs. A recent shift towards Evaluative AI reframes explanation as a tool for hypothesis testing, but still focuses primarily on operational organizations. We have introduced Holistic-XAI (H-XAI), a unified framework that integrates causal rating methods with traditional XAI methods to support explanation as an interactive, multi-method process. H-XAI allows stakeholders to ask a series of questions, test hypotheses, and compare model behavior against automatically constructed random and biased baselines.
In this keynote, I will describe Holistic-XAI and illustrate its potential for promoting AI trustworthiness and conducting risk assessment for black-box AI systems. The talk will give an overview of XAI and its renewed focus on promoting user autonomy in decision making, lead into our approach to causality-based AI assessment and rating, and then describe how H-XAI combines instance-level and global explanations, adapting to each stakeholder's goals, whether understanding individual decisions, assessing group-level bias, or evaluating robustness under perturbations. We demonstrate the generality of our approach through two case studies spanning six scenarios: binary credit risk classification and financial time-series forecasting. H-XAI fills critical gaps left by existing XAI methods by combining causal ratings and post-hoc explanations to answer stakeholder-specific questions at both the individual decision level and the overall model level.
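To make the baseline-comparison idea concrete, the sketch below checks a toy credit-risk classifier's group-level approval gap against an automatically constructed random baseline. It is an illustrative approximation only, not the H-XAI implementation; the data, feature names, and thresholds are hypothetical.

```python
# Illustrative sketch (not the speakers' H-XAI code): rate a toy credit-risk
# model by comparing its group-level behavior against a random baseline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data: a protected group attribute correlated with income.
group = rng.integers(0, 2, n)
income = rng.normal(50 + 5 * group, 10, n)
debt = rng.normal(20, 5, n)
y = (income - debt + rng.normal(0, 5, n) > 30).astype(int)  # repayment label
X = np.column_stack([income, debt])

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

def approval_gap(predictions, group):
    """Absolute difference in positive-prediction rate between the two groups."""
    return abs(predictions[group == 1].mean() - predictions[group == 0].mean())

# Random baseline: predictions drawn uniformly, ignoring the data entirely.
random_pred = rng.integers(0, 2, n)

print(f"model approval gap  : {approval_gap(pred, group):.3f}")
print(f"random baseline gap : {approval_gap(random_pred, group):.3f}")
# A model gap far above the random baseline signals that decisions track group
# membership (directly or via correlated features) and deserve closer scrutiny.
```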
Stefan Buijsman
Associate Professor
Delft University of Technology
Topic
Justifying Metric Choices in AI Trustworthiness Assessments
Bio
Stefan Buijsman is an associate professor in philosophy at Delft University of Technology, where he leads the Delft Digital Ethics Centre and the WHO Collaborating Centre on AI for Health Governance, including ethics. His research focuses on connecting ethical principles to design and governance requirements for AI systems, primarily in healthcare and the public sector. In addition to his research, he has also written three popular science books on mathematics and AI.
Abstract
AI Trustworthiness Assessments are only as good as the measures used to assess the trustworthiness of these systems. It is therefore paramount that we can justify our choice of metrics in these assessments, especially for difficult-to-quantify ethical and social values. Here I present a two-step approach to ensure metrics are properly motivated: first, a conception needs to be spelled out (e.g. Rawlsian fairness or fairness as solidarity), and second, a metric can be fitted to that conception. Both steps require separate justifications. Conceptions can be judged on how well they fit the function of, for example, fairness, and I argue that conceptual engineering offers helpful tools for this step. Metrics then need to be fitted to the chosen conception; I illustrate this process through an examination of competing fairness metrics, showing that the additional content a conception offers helps us justify the choice of a specific metric. In this way, I argue, we can ensure that the metrics used in trustworthiness assessments are systematically justified not only in a technical sense, but also in the sense that they measure the right thing to capture trust.
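As a concrete illustration of how competing fairness metrics encode different conceptions, the sketch below computes two standard group-fairness metrics on hypothetical data: demographic parity difference (fairness as equal treatment of groups) and equalized-odds difference (fairness as equal error rates). This is an illustrative example rather than material from the talk, and the data and names are hypothetical.

```python
# Illustrative sketch: two fairness metrics that answer to different conceptions.
import numpy as np

def demographic_parity_diff(pred, group):
    """|P(pred=1 | group=1) - P(pred=1 | group=0)|"""
    return abs(pred[group == 1].mean() - pred[group == 0].mean())

def equalized_odds_diff(pred, y, group):
    """Largest gap between groups in false-positive (y=0) or true-positive (y=1) rate."""
    gaps = []
    for label in (0, 1):
        rates = [pred[(group == g) & (y == label)].mean() for g in (0, 1)]
        gaps.append(abs(rates[1] - rates[0]))
    return max(gaps)

rng = np.random.default_rng(1)
group = rng.integers(0, 2, 10_000)
y = rng.binomial(1, 0.4 + 0.1 * group)   # base rates differ across groups
pred = rng.binomial(1, 0.3 + 0.2 * y)    # an imperfect, group-blind predictor

print(f"demographic parity difference: {demographic_parity_diff(pred, group):.3f}")
print(f"equalized odds difference    : {equalized_odds_diff(pred, y, group):.3f}")
# The two metrics can rank the same predictor differently; which one is the
# right yardstick depends on the conception of fairness argued for in step one.
```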