AI Trustworthiness and Risk Assessment for Challenged Contexts (ATRACC)
AAAI 2025 Fall Symposium
Westin Arlington Gateway, Arlington, VA USA
November 6-8, 2025
Registration
Please be aware that registration fees will rise after October 3rd.
The final day to reserve a room in our block is October 18th. As we sold out last year, we advise you to book your hotel room as early as possible.
About The ATRACC Symposium Session
AI systems, including those built on large language and foundation/multimodal models, have proven their value across all aspects of human society, rapidly transforming traditional robotics and computational systems into intelligent systems with emergent, and often unanticipated, beneficial behaviors. However, the rapid embrace of AI-based critical systems introduces new classes of errors that raise risk and limit trustworthiness. Designing AI-based critical systems requires demonstrating their trustworthiness. Thus, these systems must be assessed across many dimensions by different parties (researchers, developers, regulators, customers, insurance companies, end-users, etc.) for different reasons. Trustworthiness should be assessed at both the full-system level and the level of individual AI components. At the theoretical and foundational level, such methods must go beyond explainability to deliver uncertainty estimates and formalisms that can bound the limits of the AI, provide traceability, and quantify risk.
The focus of this symposium is on AI trustworthiness broadly and methods that help provide bounds for fairness, reproducibility, reliability, and accountability in the context of quantifying AI-system risk, spanning the entire AI lifecycle from theoretical research formulations all the way to system implementation, deployment, and operation. This symposium will bring together industry, academia, and government researchers and practitioners who are vested stakeholders in addressing these challenges in applications where a priori understanding of risk is critical.
Topics of interest include, but are not limited to:
Agentic AI: addressing challenges related to autonomy and safety, including multi-agent systems with an emphasis on robustness, reliability, accountability, and emergent behaviors in risk-averse contexts.
Pluralistic alignment: approaches to AI alignment for addressing the diverse and often conflicting perspectives, values, and needs of different users.
AI benchmarking and evaluation: theoretical and empirical methods for analyzing the capabilities of foundation models, including benchmark design, formal guarantees, and multimodal AI evaluation.
Methods and approaches for enhancing and evaluating reasoning in general-purpose AI systems, e.g., causal reasoning techniques and outcome verification approaches.
Assessment of non-functional requirements such as explainability, accountability, and privacy as well as assessment from pilot stage to systematic evaluation and monitoring.
Approaches for verification and validation of AI systems, including evaluation of different aspects such as factuality and trustworthiness.
Evaluation of AI systems vulnerabilities and risks, including adversarial and red-teaming approaches.
Links between performance and trustworthiness, drawing on AI science, systems and software engineering, metrology, and the social sciences and humanities.
User studies and evaluation of governance mechanisms in organizations and communities.
For more information on topics, see our Call for Papers page.
AAAI Fall Symposium Series Website: https://aaai.org/conference/fall-symposia/fss25/