Call for Papers

Should intelligent autonomous agents always obey human commands or instructions?
We argue that, in some contexts, they should not.
Most existing research on collaborative robots and agents assumes that a "good" agent is one that complies with the instructions it is given and behaves predictably, with the consent of the human operator it serves (e.g., it should never deceive its operator). The goal of this workshop is to challenge this assumption and to rethink the desired abilities and responsibilities of collaborative agents. These include, for example, exercising appropriate, harm-preventing non-compliance (e.g., enforcing safety constraints in autonomous vehicles, or training LLMs to refuse potentially harmful or norm-violating output), among other behaviors.

We accept submissions directly addressing RaD-AI, as well as submissions on other topics relevant to RaD-AI, including but not limited to:

- Intelligent Social Agents
  - Goal Reasoning
  - Plan Recognition
  - Value Alignment
  - Social Dilemmas
- Human-Agent Interaction
  - Human-Agent Trust
  - Interruptions
  - Deception
  - Command Rejection
  - Explainability
- Societal Impacts
  - Legal and Ethical Reasoning
  - Liability
  - AI Safety
  - AI Governance

Submission deadline: February 26, 2024 (final extension; extended from January 29)

Notifications: March 11, 2024


We accept submissions of the following types: 

- Regular Research Papers (up to 6 pages, excluding references)
- Position Papers (up to 2 pages, excluding references)
- Tool Talks (up to 2 pages, excluding references)

Papers must be submitted as high-resolution PDFs, formatted for US Letter (8.5" x 11") paper and using Type 1 or TrueType fonts. Reviewing is double-blind, and submissions must conform to the AAMAS-24 submission instructions.