LaMAS 2026 focuses on the emerging field of multi-agent systems powered by LLMs. As interest in using multiple LLM agents to solve complex problems grows, our main objective is to systematically address the critical challenges and opportunities that arise from their interaction. We aim to bridge the gap in understanding failure modes, alignment challenges, and responsible behavior in these systems. The workshop will foster discussion on making these systems powerful, transparent, verifiable, and aligned with human intent.
Contact us at: aaai-lamas2026@googlegroups.com
We welcome both short papers (up to 4 pages) and long papers (up to 8 pages) following the AAAI template. Submissions may include recently published work, papers under review, work in progress, and position papers. All submissions will undergo double-blind peer review. While the workshop is non-archival, accepted papers will be featured on our website with the authors' permission. Topics of interest include but are not limited to the following:
Organization of multiple LLM agents, including their interaction paradigms, coordination strategies, and communication protocols.
Evaluation and Optimization, including methods for assessing LaMAS performance and emergent behaviors, and fine-tuning techniques to enhance their efficiency and effectiveness.
Safety, Trust, and Responsibility, covering how to design LaMAS that are safe, aligned, and trustworthy. This includes responsible agent behavior, human-in-the-loop oversight, and regulatory frameworks that ensure accountability, fairness, and transparency in real-world deployments.
Real-World Applications spanning software development, scientific research, education, and business processes, with infrastructures for large-scale multi-agent LLM deployments.
Submission site: https://openreview.net/group?id=AAAI.org/2026/Workshop/LaMAS
Paper Submission Start: 25 Sep 2025 (AoE)
Abstract Registration Due: 27 Oct 2025 (AoE)
Paper Submission Due: 3 Nov 2025 (AoE)
Notification of Paper Acceptance: 14 Nov 2025 (AoE)
Updated: Notification of Paper Acceptance: 21 Nov 2025 (AoE)
Camera-Ready Paper Due: 6 Jan 2026 (AoE)
Workshop Date: 27 Jan 2026, 9 AM - 6 PM
The detailed schedule is being continuously updated.
Professor @ Shanghai Jiao Tong University
Co-Founder & CEO @ Agnes AI, Singapore
University of Oxford
We're entering a paradigm shift from autonomous agents to sovereign agents—AI systems that can hold private keys, manage assets, fork themselves, and persist beyond any platform's control. If you encounter agents in the future "agentic web," who's in front of you? Can you trust that agent?
It's starting to look less like software and more like digital wildlife: emergent, unpredictable, ungovernable, and hiding a shoggoth behind a friendly mask. But here's the hard question: Who is accountable when an ungovernable sovereign agent causes harm?
LLMs are playing the Imitation Game: they are not mortal beings. They cannot truly feel pain, fear death, or carry consequences forward. After training, they are static files. Slashing an agent or downgrading its reputation creates no real "urge to change" in its neural system, because nothing inside the model can suffer; all that changes is external, fakeable memory. If agents cannot feel pain, some [body] must, responsibly; otherwise, eventually somebody will, unexpectedly.
Ashall Professor @ University of Oxford
Director @ the Brookings Artificial Intelligence and Emerging Technology (AIET) Initiative
Senior Research Scientist @ the National Institute of Standards and Technology (NIST)
Gopal Ramchurn @ University of Southampton (Host)
Wan Sie Lee @ IMDA, Singapore
Stefano V. Albrecht @ DeepFlow London
Ramayya Krishnan @ Carnegie Mellon University
Mengyue Yang @ University of Bristol
We will open the floor to have a discussion about fundamental questions, future directions, and industrial applications of LLM-based multi-agent systems.
Professor @ University of Southampton
Shanghai Jiao Tong University
University of Southampton
The University of Texas at Austin
University of Southampton
Carnegie Mellon University
DeepFlow London
Shanghai Jiao Tong University
University of Liverpool
The University of Texas at Austin
Oral Papers
WebArbiter: A Generative Reasoning Process Reward Model for Web Agents
Proactive Interference Reveals Working Memory Limits in LLMs Beyond Context Length
Yes FLoReNce, I Will Do Better Next Time! Agentic Feedback Reasoning for Humorous Meme Detection
COMPASS: Context-Modulated PID Attention Steering System for Hallucination Mitigation
PALADIN: Self-Correcting Language Model Agents to Cure Tool-Failure Cases
Thucy: An LLM-based Multi-Agent System for Claim Verification across Relational Databases
DYNO: Dynamic Neurosymbolic Orchestrator for Multi-Agent Systems
Poster Papers
Black-Box Red Teaming of Agentic AI: A Taxonomy-Driven Framework for Automated Risk Discovery
AutoAnnotator: A Collaborative Annotation Framework for Large and Small Language Models
FAIR-Swarm: Fault-Tolerant Multi-Agent LLM Systems for Scientific Hypothesis Discovery
Self-evolving Agents with reflective and memory-augmented abilities
Sabotage from Within: Analyzing the Vulnerability of LLM Multi-Agent Systems to Infiltration
ARCANE: A Multi-Agent Framework for Interpretable and Configurable Alignment
Orchestrating Human-AI Teams: The Manager Agent as a Unifying Research Challenge
CoSMAC: A Benchmark for Evaluating Communication and Coordination in LLM-Based Agents
AgentTrace: A Structured Logging Framework for Agent System Observability