ICLR 2026 Workshop MemAgents
Workshop on Memory for LLM-Based Agentic Systems
Date: April 26 or 27, 2026
Location: Rio de Janeiro, Brazil (Hybrid)
Agentic systems are increasingly deployed in high-stakes settings like robotics, autonomous web interaction, and software maintenance. Their success hinges on memory, i.e., how they encode, retain, retrieve, and consolidate experience into knowledge for future decisions.
While LLM memorization typically refers to static in-weights retention, agent memory encompasses online, interaction-driven memory that is under the agent's explicit control. This shift means that agent capabilities hinge not only on raw model power, but also on how agents decide what to write to memory and how they assign temporal credit across episodes. MemAgents is a focused forum that explores foundational memory architectures and representations, such as episodic, semantic, and working memory, as well as their interfaces with external stores and parametric knowledge. Building on these foundations, the workshop will also discuss the broader concept of the memory layer that underlies agent behavior across domains, including software tools, embodied tasks, and multi-agent settings. We bridge three perspectives:
Memory Architectures: Episodic, semantic, working, and parametric memory.
Systems & Evaluation: Data structures, retrieval pipelines, and long-horizon benchmarks.
Neuroscience-Inspired Memory: Complementary learning systems and hippocampal-cortical consolidation as design inspiration.
We invite researchers across generative AI, reinforcement learning, cognitive psychology, and neuroscience to share insights on the construction, analysis, and applications of the memory layer of agents, including episodic and semantic memory, working memory, parametric knowledge, knowledge graphs, vector databases, retrieval pipelines, context management, long-context utilization, and temporal credit assignment. We also encourage submissions from neuroscience and cognitive science (e.g., human memory limitations, bias, abstraction) that relate to agent memory design. Topics of interest include, but are not limited to:
Memory Architectures and Mechanisms: Exploring designs for long-term stores, retrieval and scheduling pipelines, context management (e.g., chunking and summarization), and interfaces between working memory and external stores to support planning and decision-making.
Explicit vs. In-Weights Memory in LLM Agents: Investigating the interplay between symbolic/textual explicit memories and parametric memory, including mechanisms for conversion, editing, and balancing these stores while preserving safety and provenance.
Memory Dynamics and Lifelong Learning: Analyzing how agents consolidate transient experiences into lasting knowledge, manage forgetting to avoid overload, and handle temporal credit assignment across sequential tasks.
Neuroscience-Inspired and Human-Centric Memory: Translating biological principles (e.g., complementary learning systems, consolidation) into agent mechanisms, and comparing agent recall biases and abstraction capabilities to human memory.
Benchmarks, Datasets, and Evaluation: Developing frameworks to evaluate long-horizon competence, standardized metrics for memory usage and forgetting, and methods to distinguish true memory use from shortcut exploitation.
Submission Tracks
Full-Length Papers (Main Track): Up to 9 pages (excluding references and supplementary materials).
Short Papers (Main Track): Up to 4 pages (excluding references and supplementary materials).
Tiny Paper Track: Limited to 2 pages. Following the ICLR 2024 Tiny Papers track, we invite submissions on emerging research, late-breaking results, opinion pieces, and concise novel insights or interdisciplinary ideas.
Important Dates (11:59pm AoE)
Call for Papers Released: January 11th, 2026
Workshop Paper Submission: opens January 11th, 2026; deadline February 13th, 2026 (extended from February 5th, 2026)
Notification of Acceptance: March 1st, 2026
Style Files and Templates
To prepare your submission to MemAgents 2026, please use the ICLR 2026 template provided at:
https://github.com/ICLR/Master-Template/raw/master/iclr2026.zip
Submission Site
We use OpenReview to manage the submission and review workflow, ensuring a rigorous double-blind process. To improve reviewer matching, all listed authors must maintain an OpenReview profile that reflects their current and past institutional affiliations and includes links to relevant professional pages such as Google Scholar, DBLP, ORCID, LinkedIn, and Semantic Scholar. Please note that submissions will remain confidential and will not be made public on OpenReview during the reviewing period.
Abstracts and papers can be submitted through the OpenReview platform via the OpenReview Submission Site.