Workshop on Pre-training Robot Learning
Workshop at CoRL 2022 - December 15th (Auckland, NZ time) - Hybrid Format
In-person Location: OGG, Building 260, Room 098
Submit Questions through Pheedloop Chat (Requires CoRL registration)
Posters: Fisher & Paykel Appliances Auditorium (level 1 of OGG)
Notes to Participants
Initially, the wrong Zoom link was shared with participants. Please see the latest email for the updated and corrected Zoom link.
All presentations will happen over Zoom. In-person presenters will walk up to the podium but still share their screen via Zoom rather than connecting a laptop through HDMI; this avoids AV delays. Presenters will receive the Zoom link via email.
Workshop Goals
A major gap between humans and robots is learning efficiency. While humans can learn a diverse suite of tasks from just a few examples or trials, current robot learning systems require massive amounts of data or supervision to learn even a single task. A critical component of human learning efficiency is the ability to effectively draw upon and reuse prior experience. Within the computer vision and NLP communities, this realization has led to a wave of research firmly establishing that pre-training on diverse datasets is vital for high performance and data efficiency on downstream tasks. Through this workshop, we hope to bring together the robotics and learning communities and discuss the role pre-training will play in robotics.
The following questions and topics will be discussed in the workshop:
What components can be pre-trained in robot learning systems?
What level of performance and/or efficiency gain can we expect from pre-training?
Can we leverage offline in-the-wild data for downstream control tasks?
Should we pre-train from simulation or from real-world datasets?
How much data is required for effective pre-training?
Are learned representations from videos better than those from images?
How can we best utilize action-free datasets for pre-training?
Which of the following will lead to the best form of pre-training for robotics: multi-task learning, unsupervised representation learning, meta-learning, skill discovery, or something else entirely?
Robots built on pre-trained models can inherit not only their capabilities but also their biases. How can we mitigate these risks?
Call for Papers!
We invite papers in the areas highlighted above. Submission instructions here.
The call for papers is now closed. Reviews will be returned soon!
The Speakers
Jitendra Malik
UC Berkeley
Chelsea Finn
Stanford
Joseph Lim
KAIST
Abhinav Gupta
CMU
Kristen Grauman
UT Austin