Learning to Learn for Robotics
Workshop at ICRA 2021
June 4th 2021
The workshop aims to provide an informative overview of the existing challenges in using meta-learning for robotics and to set the grounds for future development.
WORKSHOP ABSTRACT AND CALL FOR CONTRIBUTIONS
Recent years have seen a lot of interest in the development and use of learning-to-learn for a wide range of applications. Yet, existing solutions rarely take into account the structured nature of a learner's body, which constrains their applicability to learning in physical environments, an aspect that is core to robotics.
In this workshop, we would like to discuss how humans use their own embodied structure to learn effectively and efficiently, and what we can do to enable robots to learn-to-learn in a similar fashion. Our goal is to bring together researchers from a variety of backgrounds, with the hope of discussing and reasoning about what learning-to-learn and embodiment mean from a cognitive neuroscience perspective, and how this knowledge might translate into improving robot learning.
Concretely, we are interested in the following questions:
What can we learn from how humans utilise their bodies in the process of learning-to-learn?
How can we make use of the embodied structure of a robot to improve or self-supervise the process of meta-learning? Do we need new algorithms for embodied meta-learning?
Do people learn-to-learn different concepts separately, or do they learn continuously throughout their lifespans? How do they make sense of the surrounding environment in this process?
How can we ensure learning-to-learn is safe for an embodied robot and its environment?
We also solicit submissions on negative results in meta-learning that help us understand the current limitations and boundaries of learning-to-learn.
Important Dates and Submission instructions
Submission Form: https://forms.gle/Y6yMiSY3tc3wmF969
Submission Deadline: May 28th 2021 (extended from May 15th 2021)
Notification: May 31st 2021
Camera Ready Submission: TBA
Workshop: June 4 2021
For formatting instructions, please refer to https://www.ieee.org/conferences/publishing/templates.html
Papers should be at most 3 pages long, excluding references. We accept work that has been submitted to, but not yet published at, another conference.
The Plan
Our plan is to collect novel contributions that showcase how lifelong and meta-learning strategies are applied to physical robotic tasks. We welcome submissions on, but not restricted to, the following topics:
Topical Areas
Data efficiency via transfer/multi-task/meta-learning for real-world tasks
Embodied learning-to-learn
Robot learning through embodied self-supervision
Online / fast adaptation to changing dynamics models and other sources of covariate shift
Domain/Policy adaptation between (for example) different robotic platforms, collaborative settings, varying environments and tasks
Standardised frameworks, metrics for physical evaluation, etc., for learning-to-learn algorithms
Causality in the context of learning-to-learn for robotics
Learning-to-learn with physical safety for the robot and its environment
Social learning-to-learn by imitating other agents' learning strategies
We seek interesting ideas and applications addressing problems across these topics that can provide an informative overview of the existing challenges in applying meta-learning to robotics and can set the grounds for future development.
Invited speakers
Pierre-Yves Oudeyer is a Research Director at Inria, France. He studies lifelong autonomous learning and the self-organisation of behavioural, cognitive, and language structures.
Matej Hoffmann is an Assistant Professor at CTU Prague. He uses humanoid robots with electronic skin to uncover the mechanisms of how babies learn about their bodies.
Chelsea Finn is an Assistant Professor in EECS at Stanford. She studies the capability of robots to develop intelligent behaviour through learning and interaction.
Simon Osindero is a Senior Staff Research Scientist at DeepMind. He is currently investigating a broad range of fundamental areas in artificial intelligence and machine learning, and their applications.
Pulkit Agrawal is an Assistant Professor in EECS at MIT. His interests are in building machines that can automatically and continuously learn about their environment.
Dana Kulic is the Director of Monash Robotics and a Professor at Monash University. Her interests are in autonomous systems that can operate in concert with humans, using natural and intuitive interaction strategies while learning from user feedback.
Schedule and virtual format
New York: 08:50 - 09:00 - (London: 13:50 - 14:00) Introduction and opening remarks
New York: 09:00 - 09:35 - (London: 14:00 - 14:35) Pulkit Agrawal - Rethinking learning in robot learning
New York: 09:35 - 10:10 - (London: 14:35 - 15:10) Matej Hoffmann - Learning body models: from humans to humanoids
New York: 10:10 - 10:45 - (London: 15:10 - 15:45) Pierre-Yves Oudeyer - Autotelic Deep RL Agents: Learning to self-supervise for autonomous development
New York: 10:45 - 10:55 - (London: 15:45 - 15:55) (paper presentation) Generalising Deformable Object Manipulation by Learning to Learn from Demonstrations [pdf]
New York: 10:55 - 11:05 - (London: 15:55 - 16:05) (paper presentation) Learning Autonomous Flight from Human Drone Racing Pilots [pdf]
New York: 11:05 - 11:15 - (London: 16:05 - 16:15) (paper presentation) Curiosity-driven Intuitive Physics Learning [pdf]
New York: 11:15 - 11:45 - (London: 16:15 - 16:45) Coffee Break
New York: 11:45 - 12:20 - (London: 16:45 - 17:20) Chelsea Finn - Learning Exploration Strategies with Meta Reinforcement Learning
New York: 12:20 - 12:55 - (London: 17:20 - 17:55) Simon Osindero - Learning to Learn for Robotics
New York: 12:55 - 13:30 - (London: 17:55 - 18:30) Dana Kulic - Learning from Human Robot Interaction
New York: 13:30 - 14:30 - (London: 18:30 - 19:30) Panel
New York: 14:30 - 14:40 - (London: 19:30 - 19:40) Closing Remarks
Video Recording
Organisers
Sarah Bechtle
MPI for Intelligent Systems
Timothy Hospedales
University of Edinburgh & Samsung AI Research
Yevgen Chebotar
Google Brain
For questions, please contact us at learningtolearn.icra2021@gmail.com.