We hope to see you next year and at our future initiatives.
If you would like to provide feedback on the workshop or to opt in for future RaD-AI events, please fill out the following poll: https://forms.gle/rqN45pq4sP3UGjhdA
Detroit, Michigan (USA) | 20 May 2025
All times are given in Detroit, MI (USA) local time (EDT, i.e., GMT-4)
09:00-09:15 Introduction (Co-Chair David Aha)
09:15-10:00 Session 1 (Theme: GenAI)
09:15-09:35 GPT-4 as a Moral Reasoner for Robot Command Rejection (Invited HAI-2024 Paper)
Ruchen Wen, Francis Ferraro, & Cynthia Matuszek (University of Maryland, Baltimore County; USA)
09:40-10:00 Disobedience for a Cause: Leveraging Implicit Objectives in User Plans by LLM Surrogates
Dale Peasley, Zachary Gray, & Sandip Sen (University of Tulsa, USA) paper
10:00-10:30 Coffee Break
10:30-12:30 Session 2 (Theme: Mental Modeling & HAI)
10:30-10:50 Mental Model-based Generation of Lies for Insider Threat Modeling
Brittany Cates & Sarath Sreedharan (Colorado State University, USA) paper
10:55-11:10 Rebellion for the Greater Good: When AI Agents Disobey to Optimize Team Performance
Eric Peterson & Sandip Sen (University of Tulsa, USA) paper
11:15-11:35 Obey, Refrain, or Contravene?
Divya Sundaresan (NCSU, USA), Gordon Briggs (NRL, USA), & Munindar P. Singh (NCSU, USA) paper
11:40-11:55 Frictive Policy Optimization for LLM Agent Interactions
James Pustejovsky (Brandeis University, USA) & Nikhil Krishnaswamy (Colorado State University, USA) paper
12:00-12:30 Discussion: Morning Presentations
12:30-14:00 Lunch
14:00-15:50 Session 3 (Theme: Decision Making)
14:00-14:45 Trustworthy Algorithmic Delegate: An Alignable Decision Maker (Invited Talk)
Matthew Molineaux (Parallax Advanced Research, USA)
Everyone wants ethical behavior from autonomous systems, but humans don’t even agree on what ethics to use. In some domains, such as medical triage, there is no clear “ethical” or “best” decision. Nevertheless, junior personnel need help with these decisions, and systems will eventually need to make them quickly, without oversight. The Trustworthy Algorithmic Delegate (TAD) is designed to rapidly acquire a representation of how an individual trusted decision maker makes decisions and to apply that standard to new decisions.
14:50-15:10 R-HTN: Rebellious Online HTN Planning
Hector Munoz-Avila (American University, USA), Paola Rizzo (Intergens, Italy), & David W. Aha (NRL, USA) paper
15:15-15:30 Plan Recognition for Rebel Agent Systems
Michael Cox (Wright State University, USA) paper
15:35-15:50 You said No, What's Next?
Sarath Sreedharan (Colorado State University, USA) & Gordon Briggs (NRL, USA) paper
15:50-16:15 Coffee Break
16:15-17:30 Session 4 (Theme: Related work at AAMAS-25)
16:15-16:35 Model and Mechanisms of Consent for Responsible Autonomy (Invited AAMAS-25 Paper)
Anastasia S. Apeiron (Utrecht University (UU), The Netherlands), Davide Dell’Anna (UU, The Netherlands),
Pradeep K. Murukannaiah (TU Delft, The Netherlands), & Pinar Yolum Birbil (UU, The Netherlands)
16:40-17:15 Discussion: Afternoon Presentations
17:15-17:30 Conclusion and Wrap-up
Dr. Matt Molineaux is Director of AI and Autonomy at Parallax Advanced Research, where he has led a series of artificial intelligence efforts for AFRL and DARPA customers. His research focuses on building robust integrated intelligent systems and autonomous agents using technologies including machine learning, automated planning, automated diagnosis, case-based reasoning, plan recognition, search-based optimization, and logical inference. He is an AAAI member with over 50 peer-reviewed publications in AI.