Friday, May 7th 2021

Learning to Learn Workshop


  • To attend our workshop, please access the livestream on our ICLR workshop page!

  • Join our Poster Session; the link to our town is on our ICLR workshop page!

  • For active participation, please join our Zoom webinar (also linked from our ICLR workshop page) during the panel discussion.

  • To ask questions, use the RocketChat on our ICLR workshop page!

The goal of the workshop is to bring together scientists from different backgrounds to further the development of learning-to-learn algorithms.


Recent years have seen a lot of interest in the use and development of learning-to-learn algorithms. Research on learning-to-learn, or meta-learning, algorithms is often motivated by the hope to learn representations that can be easily transferred to the learning of new skills, and lead to faster learning. Yet, current meta-learned representations often struggle to generalize to novel task settings. In this workshop, we’d like to discuss how humans meta-learn, and what we can and should expect from learning-to-learn in the field of machine learning.

In this context, our aim is to bring together researchers from a variety of backgrounds to discuss what learning to learn means from a cognitive perspective, and how this knowledge might translate into algorithmic advances. In particular, we are interested in creating a platform for exchange between the fields of neuroscience and machine learning. With this goal in mind, we assembled a list of speakers that reflects this variety of backgrounds and schools of thought, creating an opportunity to reflect upon questions that we believe are important for advancing meta-learning in the machine learning community.

We believe that it is an important moment for the machine learning community to reflect upon these questions in order to advance the field and increase its variety in approaching learning to learn. We hope that by fostering discussions between cognitive science and machine learning researchers, we enable both sides to draw inspiration to further the understanding and development of learning-to-learn algorithms.

Concretely we are interested in the following questions:

  1. What do we know about how humans learn to learn? How much of this knowledge has already been realized in intelligent systems and what remains to be explored?

  2. What should be our expectation of meta-learning in intelligent systems? Are we currently evaluating the performance of meta-learning in a meaningful way, or are our learning problems and evaluations ill-posed?

  3. Related to 2), how does meta-learning relate to the “no free lunch” theorem, and how do we keep this theorem in mind when implementing learning-to-learn approaches in intelligent systems?

  4. How can we meta-learn in a lifelong learning setting? Should we meta-learn how to learn continuously or should we continuously meta-learn, or both? How do humans do it?

  5. We also solicit submissions on negative results in meta-learning that help us understand the current limitations and boundaries of learning to learn.

Important Dates and Submission instructions

Camera Ready and Presentation Instructions:

Camera-Ready Deadline: April 30th 2021

  • Please use this style file for your camera-ready submission and upload the camera-ready version to OpenReview.

  • Prepare one slide presenting your work for the lightning talk. The talk should be max 3 minutes long. Add your slide to the workshop slide deck (link is in the email to authors) by April 30th 2021. Please be aware that you will not be able to 'click forward' in your presentation, so do not add animations, etc.

  • Prepare a poster for the virtual poster session and send it in by April 30th 2021.

Submission Site:

Submission Deadline: March 12th 2021 (extended from February 26th 2021)!

Notification: March 26th 2021

Camera Ready Submission: April 30th 2021

Workshop: May 7th, 2021

For formatting instructions, please refer to

Papers should be 2 to 4 pages long, excluding references. We accept work that has been submitted but not yet published at another conference.

To ensure high review quality, we ask all submissions to provide two contacts that have agreed to review for the workshop.

Invited Speakers

Ishita Dasgupta is a research scientist at DeepMind; her research is at the intersection of machine learning and computational cognitive science.

Jane Wang is a research scientist at DeepMind. In her research she is interested in applying neuroscience principles to inspire new algorithms for artificial intelligence and machine learning.

Risto Miikkulainen is Professor of CS and Neuroscience at UT Austin. His main focus areas are neuroevolution, cognitive science, and computational neuroscience.

Erin Grant is a Ph.D. candidate at UC Berkeley, interested in exploring meta-learning as a tool in cognitive modelling and in building cognitively inspired models of meta- and continual learning.

Edward Grefenstette is a research scientist at FAIR, London. His research interests include meta- and lifelong learning for a variety of applications.

Jennifer Raymond is a Neurobiology professor at Stanford; her laboratory studies the neural mechanisms of learning. Toward this goal, she uses a combination of behavioral, neurophysiological, and computational approaches.

Schedule and virtual format

ALL Times Pacific Time (PT)

7.00am - 7.10am Introduction and opening remarks
7.10am - 7.40am Invited Speaker: Ishita Dasgupta
7.40am - 8.10am Invited Speaker: Jane Wang
8.10am - 8.40am Invited Speaker: Jennifer Raymond

8.40am - 9.15am Lightning Talks: Contributed Papers
9.15am - 10.00am Poster Session

10.00am - 10.30am Invited Speaker: Edward Grefenstette
10.30am - 11.00am Invited Speaker: Risto Miikkulainen
11.00am - 11.30am Invited Speaker: Erin Grant

11.30am - 12.45pm Q&A + Zoom panel


Accepted Papers

Offline Meta Learning of Exploration,

Ron Dorfman, Idan Shenfeld, Aviv Tamar [paper]

Few-Shot learning with weak supervision,

Ali Ghadirzadeh, Petra Poklukar, Xi Chen, Huaxiu Yao, Hossein Azizpour, Mårten Björkman, Chelsea Finn, Danica Kragic [paper]

Meta-learning using privileged information for dynamics,

Ben Day, Alexander Luke Ian Norcliffe, Jacob Moss, Pietro Liò [paper]

The Emergence of Abstract and Episodic Neurons in Episodic Meta-RL,

Badr AlKhamissi, Muhammad ElNokrashy, Michael Spranger [paper]

Exploring the Similarity of Representations in Model-Agnostic Meta-Learning,

Thomas Goerttler, Klaus Obermayer [paper]

How Sensitive are Meta-Learners to Dataset Imbalance?,

Mateusz Ochal, Massimiliano Patacchiola, Jose Manuel Vazquez Diosdado, Amos Storkey, Sen Wang [paper]

Meta Learning for Multi-agent Communication,

Abhinav Gupta, Angeliki Lazaridou, Marc Lanctot [paper]

Learning where to learn,

Dominic Zhao, Nicolas Zucchet, Joao Sacramento, Johannes Von Oswald [paper]

Compositionality as Learning Bias in Generative RNNs solves the Omniglot Challenge,

Sarah Fabi, Sebastian Otte, Martin V. Butz [paper]

Meta-Learning for Planning: Automatic Synthesis of Sample Based Planners,

Lucas Paul Saldyt, Heni Amor [paper]

Accelerating Online Reinforcement Learning via Model-Based Meta-Learning,

John D Co-Reyes, Sarah Feng, Glen Berseth, Jie Qui, Sergey Levine [paper]

Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment,

Michael Chang, Sidhant Kaushik, Thomas L. Griffiths, Sergey Levine [paper]


Organizers

Sarah Bechtle
MPI for Intelligent Systems

Timothy Hospedales
University of Edinburgh & Samsung AI Research

Todor Davchev
University of Edinburgh

Franziska Meier
Facebook AI Research

Yevgen Chebotar
Google Brain

For questions, please contact us at