Friday, May 7th 2021
Learning to Learn Workshop
virtually at ICLR
The goal of the workshop is to bring together scientists from different backgrounds to further the development of learning-to-learn algorithms.
WORKSHOP ABSTRACT AND CALL FOR CONTRIBUTIONS
Recent years have seen a lot of interest in the use and development of learning-to-learn algorithms. Research on learning-to-learn, or meta-learning, algorithms is often motivated by the hope to learn representations that can be easily transferred to the learning of new skills, and lead to faster learning. Yet, current meta-learned representations often struggle to generalize to novel task settings. In this workshop, we’d like to discuss how humans meta-learn, and what we can and should expect from learning-to-learn in the field of machine learning.
In this context, our aim is to bring together researchers from a variety of backgrounds to discuss what learning to learn means from a cognitive perspective, and how this knowledge might translate into algorithmic advances. In particular, we are interested in creating a platform to enable exchange between the fields of neuroscience and machine learning. With this goal in mind, we assembled a list of speakers that reflects this variety of backgrounds and schools of thought, creating an opportunity to reflect on questions we believe are important for advancing meta-learning in the machine learning community.
We believe this is an important moment for the machine learning community to reflect on these questions, in order to advance the field and broaden its range of approaches to learning to learn. We hope that by fostering discussion between cognitive science and machine learning researchers, we enable both sides to draw inspiration for furthering the understanding and development of learning-to-learn algorithms.
Important Dates and Submission Instructions
is a research scientist at DeepMind. Her research lies at the intersection of machine learning and computational cognitive science.
is a research scientist at DeepMind. In her research she is interested in applying neuroscience principles to inspire new algorithms for artificial intelligence and machine learning.
is Professor of CS and Neuroscience at UT Austin. His main focus areas are neuroevolution, cognitive science, and computational neuroscience.
is a Ph.D. candidate at UC Berkeley, interested in exploring meta-learning as a tool in cognitive modelling and in building cognitively inspired models of meta- and continual learning.
is a research scientist at FAIR, London. His research interests include meta-learning and lifelong learning for a variety of applications.
is a professor of Neurobiology at Stanford. Her laboratory studies the neural mechanisms of learning; toward this goal, she uses a combination of behavioral, neurophysiological, and computational approaches.
Schedule and virtual format
For questions please contact us at email@example.com