Lifelong Robot Learning: Generalization, Adaptation, and Deployment with Large Models
Workshop @ RSS 2024
Abstract
In recent years, we have witnessed the tremendous success of large models (so-called foundation models) in computer vision and natural language processing (NLP). Hence, there has been growing interest in repurposing existing large models, or training new ones, for robotics problems. However, unlike the vision and language domains, robotics has no readily available internet-scale data for training such models. One alternative is therefore to train large models for robotics in a lifelong fashion, where robots collect data during deployment and refine the model on the fly. Although the concept and formulation of lifelong learning originated in robotics, it has mostly been studied in vision and NLP. Now, with large models at hand, it is the right time to revisit lifelong robot learning in a scalable manner.
Discussion Topics
In this workshop, our objective is to unite participants to explore and envision the future paradigm of robot learning and deployment, emphasizing the continuous adaptation of robots to novel scenarios in the era of large models. The workshop is intended for an audience with backgrounds in robot learning, foundation models for robots, and lifelong learning in decision-making. We invite speakers and presenters from these subfields, aiming to inspire meaningful discussions on the generalization, adaptation, and deployment of lifelong robot learning methods, given the availability of large models.
We would like to gather people who are interested in generalist agents, lifelong learning, and robotics to discuss the following topics:
Is lifelong learning critical for building robots that can be deployed in the real world? Is large-scale multi-task learning sufficient for robotics, just as large language models exhibit emergent zero-shot abilities?
How should robots adapt to personal usage? How can robots continually learn from their users while maintaining privacy?
Will the large foundation model for robotics consist of a set of foundation models specialized in perception, control, and language, or will it be a single end-to-end large model? In other words, is it more likely to be a compositional model or a single giant transformer-like model?
What properties should an ideal lifelong learning algorithm possess for efficiently adapting robotics models?
Do we need paradigm shifts in neural architectures of large models to support lifelong learning?
Given the discussion above, what type of benchmark problems should researchers work on? What metrics should we focus on?
How can we leverage large foundation models to facilitate lifelong robot learning? What new opportunities and challenges does the era of large models bring to lifelong robot learning?
Call for Contributions
Topics:
This workshop accepts contributions on topics including but not limited to:
Lifelong / Continual robot learning
Adaptation and generalization
Large foundation models for robotics
Robot foundation models
Continual reinforcement learning
Learning from demonstrations
Skill learning and skill transfer
Personalized human-robot interaction
Important Dates:
Paper Submission Deadline: June 8th, 2024
Notification: June 21st, 2024
Camera-Ready Deadline: July 5th, 2024
Workshop Date: July 19th, 2024
Submission Guidelines:
We invite all types of submissions, focusing on lifelong robot learning in the era of large foundation models. We welcome papers already accepted by other related venues or summarizing ongoing research.
Page limits and templates: There is no page limit for submissions. Any conference or journal template is acceptable, but we highly recommend using the RSS template (LaTeX and Word) when preparing the submission.
Review process and presentation logistics: All submissions should be anonymized. The review process is double-blind. We will only release the accepted papers and their reviews. Accepted papers will be presented in a poster session, and each spotlight paper will have a 5-minute lightning talk.
Submission Link: link
Call for Reviewers:
Please fill out this form if you would like to be a reviewer. Thank you for your support.
Schedule (Tentative)
Invited Speakers
Tamim Asfour. Professor at KIT.
Talk title: Incremental and Affordance-Based Learning from Interaction and Large Language Models
Beomjoon Kim. Professor at KAIST.
Talk title: Making Robots See and Manipulate
Jie Tan. Tech Lead Manager at Google DeepMind.
Talk title: Robot Lifelong Learning via Human-Robot Interaction
Matthew Gombolay. Professor at Georgia Institute of Technology.
Jim Fan. Research Scientist at NVIDIA (US).
Organizers
Yifeng Zhu
UT Austin
Mengdi Xu
CMU
Shiqi Zhang
SUNY Binghamton
Bo Liu
UT Austin