AI Trustworthiness and Risk Assessment for Challenged Contexts (ATRACC) - Accepted Papers
AAAI 2025 Fall Symposium
Westin Arlington Gateway, Arlington, VA USA
November 6-8, 2025
Accepted Papers
The Anatomy of a Trustworthy AI Answer: A Comparative Experiment for RAG Architectures
Dippu Kumar Singh
Uncovering Systemic and Environment Errors in Autonomous Systems Using Differential Testing
Yashwanthi Anand, Rahil P Mehta, Manish Motwani and Sandhya Saisubramanian
A Brief Overview of Key Quality Metrics for Knowledge Graph Solution. Illustration on Digital NOTAMs
Juliette Mattioli, Lucas Mattioli and Martin Gonzalez
Introducing RUM: A Methodological Contribution for Engineering Trustworthy AI Components in Industrial Systems
Martin Gonzalez, Loic Cantat and Kevin Pasini
Rashomon in the Streets: Explanation Ambiguity in Scene Understanding
Helge Spieker, Jørn Eirik Betten, Arnaud Gotlieb, Nadjib Lazaar and Nassim Belmecheri
Error Detection and Correction for Interpretable Mathematics in Large Language Models
Yijin Yang, Cristina Cornelio, Mario Leiva and Paulo Shakarian
Grounded Instruction Understanding with Large Language Models: Toward Trustworthy Human-Robot Interaction
Ekele Ogbadu, Stephanie Lukin and Cynthia Matuszek
Data Drift Detection and Assessment for AI-hybrid Models Applied on Electrical Energy Consumption
Faouzi Adjed, Milad Leyli-Abadi, Elies Gherbi and Martin Gonzalez
Utilizing SBOM for Transparent AI Risk Communication
Lennard Helmer, Lisa Fink and Maximilian Poretschkin
Enhancing Trustworthiness in VAD with Rule-Based VLM-LLM Explanations
Mohamed Ibn Khedher, Faouzi Adjed and Joseph Kattan
The Map of Misbelief: Tracing Intrinsic and Extrinsic Hallucinations Through Attention Patterns
Elyes Hajji, Aymen Bouguerra and Fabio Arnez
ZAAS: Zonal Aware Anomaly Score for Time Series
Elies Gherbi, Nabil Ait Said, Faouzi Adjed and Achraf Kallel
Interactive Simulations of Backdoors in Neural Networks
Peter Bajcsy and Maxime Bros
Empirical Evidence for Alignment Faking in Small LLMs and Prompt-Based Mitigation Techniques
J Koorndijk
Query-Based Model Extraction Attack on GCN: A Surrogate Model Technique for Non-Euclidean Data
Sibtain Syed, Alvi Ataur Khalil, Kishor Datta Gupta, Saima Jabeen and Mohammad Ashiqur Rahman
Continuous Monitoring of Large-Scale Generative AI via Deterministic Knowledge Graph Structures
Kishor Datta Gupta, Mohd Ariful Haque, Hasmot Ali, Marufa Kamal, Sayd Bahauddin Alam and Mohammad Ashiqur Rahman
Ethics2vec: aligning automatic agents and human preferences
Gianluca Bontempi
Assessing the Geolocation Capabilities, Limitations and Societal Risks of Generative Vision-Language Models
Oliver Grainge, Sania Waheed, Jack Stilgoe, Michael Milford and Shoaib Ehsan
Identifying the Supply Chain of AI for Trustworthiness and Risk Management in Critical Applications
Raymond Sheh and Karen Geappen
Challenges and Choices when Evaluating Alignment in Human-AI Systems
Jennifer McVay and Ewart de Visser
Bridging AI and Health on Time Series Analysis and Explainability Using the Case Study of EEG Channel Selection Problem
Vandana Srivastava and Biplav Srivastava
Artificial Insurance: Exposing the Coverage, Controls, and Measurement Gaps of Insurance for AI Risks
Erin Kenneally
On Identifying Why and When Foundation Models Perform Well on Time-Series Forecasting Using Automated Explanations and Rating
Michael Widener, Kausik Lakkaraju, John Aydin and Biplav Srivastava
Position: LLMs Need To Go Beyond Computational Confidence Metrics to Establish Trust
Anil B Murthy and Lindsay Sanneman