Agent Learning in Open-Endedness (ALOE)
ICLR 2022 Workshop
CALL FOR PAPERS
Rapid progress in AI has produced agents that learn to succeed in many challenging domains. However, once a domain has been mastered, the learning process typically ends. In contrast, the real world is an open-ended ecosystem of problems and solutions, presenting endless, novel challenges, which in turn shape the evolution of humans and other living organisms that must continually solve them for survival. While so far no artificial learning algorithm has produced an intelligence as general as humans, we know that we ourselves resulted from this open-ended process of coevolution among living organisms and the environment. This discrepancy suggests that we might achieve a major ramp-up in the capabilities of our AI agents through similar open-ended coevolutionary processes between agents and their environments. Such an open-ended process may result in agents capable of solving an unboundedly large set of challenges, including surprising emergent scenarios that might not have been explicitly considered when designing the learning system—leading to improved performance in important settings such as sim2real.
This workshop aims to unite researchers across many relevant fields, including reinforcement learning, continual learning, evolutionary computation, and artificial life, in pursuit of creating open-ended learning systems. We hope our workshop provides a forum both for bridging knowledge across fields and for sparking new insights that can enable agent learning in open-endedness. Specifically, we are interested in cultivating new research that addresses the following key questions:
How can we formalize the notion of "open-endedness," thereby providing desiderata for achieving a truly open-ended learning process, as well as metrics for measuring its progress?
What additional metrics do we need in order to understand and control the emergent properties of environments, tasks, and agents produced under open-ended learning?
How can we produce agents that continue to explore and represent knowledge about a world with unboundedly rich states and dynamics?
How can we devise training algorithms with provable guarantees on how well agents will generalize in open-ended environments?
How can we take advantage of substructures in open-ended environments to efficiently train agents, for example, through adaptive curricula?
We invite authors to submit papers focused on these and other challenges of learning in open-ended environments. Papers can be up to 8 pages, excluding references and appendices, in the ICLR 2022 format. In particular, we encourage submissions related to open-endedness in the following areas:
Benchmarks for open-endedness
Scalable, open-ended environments and simulations
Curriculum learning / unsupervised environment design
Self-supervised reinforcement learning
Multi-agent / population-based / co-evolutionary methods
Real-world applications of open-ended learning systems
The full details of the submission and review process are provided here.
Paper submission deadline: 11:59 PM, March 4, 2022 (AoE)
Decision notifications: March 28, 2022
Camera-ready submission deadline: April 25, 2022
ALOE 2022 workshop date: April 29, 2022
Philip J. Ball
Simon C. Smith
Join the ALOE community on Slack to ask questions, get updates, and exchange ideas.