Software development is being reshaped by collaborative AI agents that orchestrate full programming workflows. Unlike simple code completion, these systems must coordinate with each other and with human developers across planning, implementation, testing, debugging, and documentation. This workshop brings together researchers and practitioners to design, evaluate, and deploy AI teammates for real-world development.
We explore how AI agents collaborate in modern software engineering. We study single-agent systems that work with humans, multi-agent systems that coordinate among themselves, and hybrid approaches. We focus on interaction models, handoffs, workflow design, and safeguards that preserve human agency while leveraging AI capability. We also ask how to make these systems reliable, auditable, and safe for production, and what verification and evaluation frameworks are needed. Topics of interest include:
Collaborative AI architectures for software development
Multi-agent coordination strategies
Human–AI workflow integration and handoff mechanisms
Preserving human expertise and agency in AI-assisted coding
Interaction design in IDEs, CLIs, and collaborative environments
Trust, reliability, and safety in collaborative coding agents
Verification, validation, and auditing frameworks
Evaluation methodologies and benchmarks for collaboration
User studies and empirical developer experience evaluations
Industrial deployments and case studies of AI coding agents
Applications in planning, implementation, debugging, and testing
Lessons from systems such as GitHub Copilot, Amazon Kiro, Cursor AI, and Asimov
We welcome submissions from academia and industry.
Regular papers (up to six pages): mature research with empirical results or theory.
Position papers (up to four pages): new perspectives, conceptual frameworks, or emerging directions.
Extended abstracts (up to two pages): early-stage work, system demos, or industry experiences.
Accepted papers will be presented as talks, spotlights, or posters.
Attendance plan: At least one organizer will attend AAAI-26 in person to ensure on-site representation and engagement.
Submission site: https://cmt3.research.microsoft.com/CodeMates2026/
Submission deadline: October 22, 2025
Workshop date: Tuesday, January 27, 2026
TBD
Shweta Garg
AWS AI Labs
Behrooz Omidvar-Tehrani
Microsoft
Sewon Min
UC Berkeley
Allen Institute for AI
Baishakhi Ray
Columbia University
Simin Chen
Columbia University
The Microsoft CMT service was used for managing the peer-reviewing process for this conference. This service was provided for free by Microsoft and they bore all expenses, including costs for Azure cloud services as well as for software development and support.