ML for Computer Architecture and Systems
(MLArchSys 2026)
ISCA 2026, Raleigh, USA
Foundation models have catalyzed a new wave of machine learning innovation, with applications spanning from natural language understanding to image processing, protein folding, and beyond. The primary objective of this workshop is to unite the machine learning and systems communities to address the emerging architectural challenges posed by these massive models, and to drive their productive integration directly into the chip and system design processes.
This year, alongside our traditional focus areas, we are excited to introduce a new special segment: A³ (Agentic Approaches to Architecture), focusing on the next evolution of AI-driven methodologies using autonomous reasoning agents.
We invite researchers to submit short papers, position pieces, and early-stage work across the broad spectrum of ML and Systems. Submissions are encouraged across the following three tracks:
Focuses on designing, scaling, and evaluating the hardware and underlying systems needed to run massive AI workloads, including foundation models and AGI.
System design for extremely large chain-of-thought reasoning models
Hardware accelerators for neurosymbolic and hybrid AI models
Noise-tolerant, hardware-efficient approximation (e.g., numerics and analog)
Edge and embodied AI
System and architecture support for foundational models and agentic workflows at scale
Efficient model compression (e.g. quantization, sparsity) techniques
Efficient, sustainable, and ethical accelerators and system design for AGI
Distributed systems and infrastructure design for AI workloads
Evaluation of deployed machine learning systems and architectures
Focuses on applying ML, AutoML, and optimization algorithms to improve hardware, compilers, and system resilience.
Self-optimizing hardware using ML
ML-driven resilient computing
Machine learning for hardware/software co-design (AutoML for Hardware)
Automated machine learning in EDA tools
Machine learning for compiler optimization and code generation
Optimized code generation for hardware and software
Focuses on the paradigm of autonomous agents that are capable of completing long-horizon tasks, using multi-step planning, algorithmic design, and tool manipulation in an iterative feedback-driven loop.
Productivity: Agents for accelerating hardware development and improving hardware design productivity
Autonomous System Design: Multi-agent systems for discovering novel hardware and software solutions for CPUs and accelerators, such as branch predictors, prefetchers, scheduling, and data partitioning and management.
Autonomous EDA & RTL Generation: Multi-agent systems for design, verification, and timing closure.
Dynamic Adaptation: Agents for self-evolving systems, such as proactive power, thermal, and resource management, on-chip resource allocation, cache management, and DVFS.
Evaluation & Benchmarking: New metrics for evaluating progress of agentic workflows and foundational models
Simulation Environments: Simulation techniques for facilitating agentic exploration.
Verification: Agentic techniques for testing and verification.
Generative AI for security and vulnerability detection, design verification and testing
Areas: Computer Architecture, Systems, Compilers, Model Scaling, Security, Self-Attention, Foundational Models, EDA, Foundational Model Compression.
We are committed to fostering an inclusive and diverse environment for all participants. Our vision for this workshop is to build a diverse community and collectively work towards tackling challenges of foundational models. We recognize the value of diversity in promoting innovation, creativity, and meaningful discussions. Therefore, we have made significant efforts to ensure demographic diversity among our organizers and speakers. We acknowledge that achieving diversity is an ongoing process, and we continuously strive to improve our efforts in this regard. We encourage open feedback from our participants and the broader community to help us identify areas where we can enhance our inclusivity initiatives.
The use of LLMs is allowed as a general-purpose writing assist tool. Authors should understand that they take full responsibility for the contents of their papers, including content generated by LLMs that could be construed as plagiarism or scientific misconduct (e.g. fabrication of facts). LLMs are not eligible for authorship.
Authors have the right to withdraw papers from consideration at any time until paper notification. If an author withdraws a paper before the paper submission deadline, it will be deleted from the OpenReview hosting site. However, if an author withdraws a submission after the paper submission deadline, it will remain hosted on OpenReview in a publicly visible "withdrawn papers" section. Withdrawn papers will be de-anonymized.
Authors may change the author order, but may not add or remove authors. Minor changes to titles and abstracts are allowed if properly justified by the authors.
We welcome submissions of up to 4 pages (excluding references). This is not a strict limit, but authors are encouraged to adhere to it where possible.
All submissions must be in PDF format and should follow the MLArchSys'26 Latex Template (Overleaf).
Please follow the guidelines provided at ISCA 2026 Paper Submission Guidelines.
Please submit your paper at OpenReview (TBD). While the review process is not public, we will make accepted papers and their reviews public after the notification deadline.
Please carefully read and understand the MLArchSys 2026 Paper Checklist Guidelines (Same as 2025).
Reviewing will be double blind: please do not include any author names on any submitted documents except in the space provided on the submission form.
We welcome submissions that include parts of ongoing work intended for a future conference submission; however, please ensure that your submitted work has not been previously published at a conference or in a journal.
Bahar Asgari (UMD)
Thaleia Dimitra Doudali (IMDEA Software)
Qijing Huang (NVIDIA)
Akanksha Jain (Google)
Geonhwa Jeong (Meta)
Tom St. John (Gimlet Labs)
Priya Panda (USC)
Santosh Pandey (USF)
Suvinay Subramanian (Google)
Neeraja J. Yadwadkar (University of Texas, Austin)
Amir Yazdanbakhsh (Google DeepMind)
Full Paper Submission Deadline: May 1st, 2026 (TBD)
Author Notification: May 22nd, 2026.
Workshop: June 28, 2026 (Raleigh, USA).
Contact us at mlarchsys@gmail.com