Should intelligent autonomous agents always obey human commands or instructions?
We argue that, in some contexts, they should not.
Most existing research on collaborative robots and agents assumes that a “good” agent is one that complies with the instructions it is given and behaves predictably, with the consent of the human operator it serves (e.g., it should never deceive its operator). The goal of this workshop is to challenge this assumption and to rethink the desired abilities and responsibilities of collaborative agents. These include, for example, the ability to engage in appropriate, harm-preventing non-compliance (e.g., enforcing safety constraints in autonomous vehicles, or training LLMs to avoid producing potentially harmful or norm-violating output), among others.
Topics of interest include:
- Intelligent Social Agents: Goal Reasoning, Plan Recognition, Theory of Mind, Value Alignment, Foundation Model Guardrails, Social Dilemmas
- Human-Agent Interaction: Human-Agent Trust, Interruptions, Deception, Command Rejection, Explainability, Corrigibility
- Societal Impacts: Legal and Ethical Reasoning, Norms in Foundation Models, Liability, AI Safety, AI Governance
Submission deadline: 4 February 2026
Author notifications: 20 March 2026
We welcome submissions of the following types:
- Regular Research Papers (up to 8 pages, excluding references)
- Short Research Papers (up to 4 pages, excluding references)
- Position Papers (up to 4 pages, excluding references)
- Tool Talks (up to 4 pages, excluding references)
Papers must be submitted as high-resolution PDF files, formatted for US Letter (8.5" x 11") paper and using Type 1 or TrueType fonts. Reviewing is double-blind, and submissions must conform to the AAMAS-26 submission instructions.