Software development is being reshaped by collaborative AI agents that orchestrate entire programming workflows. Unlike simple code completion, these systems must coordinate with each other and with human developers across planning, implementation, testing, debugging, and documentation. This workshop brings together researchers and practitioners to design, evaluate, and deploy AI teammates for real-world development.
We explore how AI agents collaborate in modern software engineering. We study single-agent systems that work with humans, multi-agent systems that coordinate among themselves, and hybrid approaches. We focus on interaction models, handoffs, workflow design, and safeguards that preserve human agency while leveraging AI capability. We also ask how to make these systems reliable, auditable, and safe for production, and what verification and evaluation frameworks are needed.
Collaborative AI architectures for software development
Multi-agent coordination strategies
Human–AI workflow integration and handoff mechanisms
Preserving human expertise and agency in AI-assisted coding
Interaction design in IDEs, CLIs, and collaborative environments
Trust, reliability, and safety in collaborative coding agents
Verification, validation, and auditing frameworks
Evaluation methodologies and benchmarks for collaboration
User studies and empirical developer experience evaluations
Industrial deployments and case studies of AI coding agents
Applications in planning, implementation, debugging, and testing
Lessons from systems such as GitHub Copilot, Amazon Kiro, IBM Bob, Cursor AI, and Asimov
We welcome submissions from academia and industry. Submissions will be single-blind; author names and affiliations should be included in the manuscript.
Regular papers (up to six pages): Mature research with empirical results or theory.
Position papers (up to four pages): New perspectives, conceptual frameworks, emerging directions.
Extended abstracts (up to two pages): Early-stage work, system demos, industry experiences.
Accepted papers will be presented as talks, spotlights, or posters. We expect at least one author of each accepted paper to present in person.
Submission site: https://cmt3.research.microsoft.com/CodeMates2026/
Submission due: October 29, 2025
Notification date: November 11, 2025
Proceedings date: January 27, 2026
Archival note: Although AAAI does not officially archive workshop proceedings, we will make all accepted papers publicly available and citable through the workshop website to ensure accessibility and permanence.
We're proud to announce our confirmed speakers below.
Shengyu Fu
Partner Applied Science Manager, Microsoft CoreAI
From IntelliCode to GitHub Copilot: Human-Centered Coding Agents at Scale
(In-person talk)
Abhik Roychoudhury
Provost's Chair Professor
National University of Singapore
Agentic AI for Software: Lessons in Trust
(In-person talk)
Baptiste Rozière
AI Scientist @ Mistral
(Leading Code Generation)
Code Assistants: from Code Completion to Coding Agents
(Remote talk)
Dalton Flanagan
Member of Technical Staff @ Anthropic
(Claude Code)
Claude Code: One Year Later
(In-person talk)
Any questions may be directed to the workshop organizers at the following email address: coding-agent-aaai26-organizers@googlegroups.com.
Senior Applied Scientist and Science Manager at AWS AI Labs
Senior Applied Scientist and Science Manager at AWS AI Labs
Post-doctoral Researcher at Columbia University
Senior Applied Scientist at AWS
Abhilasha Katariya (Amazon)
Ajay Yadav (Google)
Dalton Flanagan (Anthropic)
Feiyang Jin (Google)
Gabriel Ryan (Microsoft)
Ignacio Erazo (Amazon)
Jinyao Guo (Purdue University)
Keyur Muzumdar (Meta)
Pareesa Golnari (Microsoft)
Pramod Chunduri (AWS)
Ravishka Rathnasuriya (The University of Texas at Dallas)
Shahed Sorower (AWS)
Xiaoyu Liu (Microsoft)
Yuntong Zhang (National University of Singapore)
Zhou Xuan (Purdue University)
Penghui Li (Columbia University)
Mingwei Zheng (Purdue University)
The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.