2nd Workshop on
Pre-Training for Robot Learning
Workshop at CoRL 2023 - November 6th - Atlanta, USA
Time: 9:00am - 5:00pm
Workshop Location: Hub 4 | Poster Location: Muse 1
Workshop Goals
Despite recent advances in large-scale machine learning, a significant gap remains between humans and robots, both in capabilities and in learning efficiency. While humans can learn a diverse suite of sensory-motor tasks from just a few examples or trials, current robot learning systems require massive amounts of data or supervision to learn even a single task. A critical component of human learning efficiency is the ability to effectively draw upon and reuse past experience. Within the computer vision and NLP communities, this realization has led to a wave of research firmly establishing that pre-training on diverse datasets is vital for high performance and data efficiency on downstream tasks. Through this workshop, we hope to bring together the robotics and learning communities and discuss the role pre-training will play in robotics.
In this workshop, we will discuss pre-training at various levels of the robotics pipeline – from perception, to sensory-motor loops, to high-level reasoning modules with LLMs. Example questions of interest include:
Which components can be pre-trained effectively in a robot learning system?
How can we utilize different data sources (e.g. sim, real-robot, human videos) for pre-training? Do they provide complementary benefits?
What is the scale of data needed for effective pre-training?
How can we evaluate the effectiveness of pre-trained models in robotics (e.g., benchmarks, evaluation protocols, and pipelines)?
What level of performance and/or efficiency gain can we expect from pre-training?
Speakers
Chelsea Finn
Stanford University
Kristen Grauman
UT Austin + Meta AI
Vincent Vanhoucke
Google DeepMind
Dhruv Batra
Georgia Tech + Meta AI
Organizers
Aravind Rajeswaran
FAIR, Meta AI
Arjun Majumdar
Georgia Tech
Stephen James
Dyson Robot Learning Lab
Younggyo Seo
Dyson Robot Learning Lab
Franziska Meier
FAIR, Meta AI
Andy Zeng
Google DeepMind