University of Pennsylvania
CoRL 2024
Abstract
Good pre-trained visual representations can enable robots to learn visuomotor policies efficiently. However, existing representations take a one-size-fits-all-tasks approach that comes with two important drawbacks: (1) being completely task-agnostic, these representations cannot effectively ignore task-irrelevant information in the scene, and (2) they often lack the representational capacity to handle unconstrained, complex real-world scenes. Instead, we propose to train a large combinatorial family of representations organized by scene entities: objects and object parts. This hierarchical object decomposition for task-oriented representations (HODOR) permits selectively assembling different representations specific to each task while scaling in representational capacity with the complexity of the scene and the task. In our experiments, we find that HODOR outperforms prior pre-trained representations, both scene vector representations and object-centric representations, for sample-efficient imitation learning across 5 simulated and 5 real-world manipulation tasks. We further find that the invariances captured in HODOR are inherited by downstream policies, which can robustly generalize to out-of-distribution test conditions, permitting zero-shot skill chaining.
Overview
Method: Inferring HODOR Representations From Foundation Models
Given task descriptions, HODOR (hierarchical object decomposition for task-oriented representations) identifies task-relevant objects and their parts, subsequently building three layers of slots in coarse-to-fine order: the full scene, task-relevant objects, and their corresponding object parts.
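The coarse-to-fine slot construction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helpers `identify_relevant_objects`, `segment`, and `extract_features` are hypothetical stand-ins for foundation-model calls (e.g. a language model parsing the task description, an open-vocabulary segmenter, and a dense vision backbone), and all names and dimensions are assumptions.

```python
import numpy as np

def identify_relevant_objects(task):
    # Hypothetical stand-in for querying a language model with the task
    # description; returns task-relevant objects and their parts.
    return {"kettle": ["handle", "lid"]} if "kettle" in task else {}

def segment(image, name):
    # Hypothetical stand-in for an open-vocabulary segmenter;
    # returns a binary mask for the named region.
    return np.ones(image.shape[:2], dtype=bool)

def extract_features(image):
    # Hypothetical stand-in for a dense vision backbone; (H, W, D) features.
    return np.random.randn(image.shape[0], image.shape[1], 64)

def pool_slot(features, mask):
    # Masked average pooling over a region -> one slot vector of size D.
    return features[mask].mean(axis=0)

def build_hodor_slots(image, task):
    feats = extract_features(image)
    # Level 0: the full scene; levels 1 and 2: objects and their parts.
    slots = {0: [pool_slot(feats, np.ones(image.shape[:2], dtype=bool))],
             1: [], 2: []}
    for obj, parts in identify_relevant_objects(task).items():
        slots[1].append(pool_slot(feats, segment(image, obj)))
        for part in parts:
            slots[2].append(pool_slot(feats, segment(image, part)))
    return slots

slots = build_hodor_slots(np.zeros((64, 64, 3)), "move the kettle")
# One scene slot, one object slot ("kettle"), two part slots ("handle", "lid").
```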
Method: Policy Architecture and Training
For the policy to flexibly process this representation, we use a transformer-based policy architecture. We feed HODOR representations into the self-attention layers, which naturally handle variable-sized, unordered sets of inputs without any special modifications. Each individual slot at each level of HODOR is represented as an input token. To represent the level-wise ordering, we attach a level embedding to each slot vector. Analogous to positional encodings in standard transformer architectures, these level embeddings are learnable vectors designed to encode the hierarchical graph structure.
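The token construction above can be sketched in a few lines of PyTorch. This is a hedged illustration under assumed hyperparameters (slot dimension 256, three hierarchy levels, a 7-DoF action output); the class and attribute names are ours, not the paper's, and the mean-pooled action head is one plausible readout choice.

```python
import torch
import torch.nn as nn

class SlotPolicy(nn.Module):
    """Transformer policy over a variable-sized set of hierarchical slot tokens."""
    def __init__(self, slot_dim=256, n_levels=3, n_heads=4, n_layers=2, act_dim=7):
        super().__init__()
        # One learnable embedding per hierarchy level (scene=0, object=1, part=2),
        # added to each slot token, analogous to a positional encoding.
        self.level_emb = nn.Embedding(n_levels, slot_dim)
        layer = nn.TransformerEncoderLayer(d_model=slot_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(slot_dim, act_dim)  # action readout

    def forward(self, slots, levels):
        # slots:  (B, N, D) set of slot vectors; N varies with scene complexity.
        # levels: (B, N) integer hierarchy level of each slot.
        tokens = slots + self.level_emb(levels)
        encoded = self.encoder(tokens)         # self-attention over the slot set
        return self.head(encoded.mean(dim=1)), encoded  # pooled action prediction

# Usage: 1 scene slot, 2 object slots, 3 part slots -> 6 input tokens.
policy = SlotPolicy()
slots = torch.randn(1, 6, 256)
levels = torch.tensor([[0, 1, 1, 2, 2, 2]])
action, _ = policy(slots, levels)
```

Because self-attention is permutation-invariant, the slot order within a level carries no meaning; only the level embedding distinguishes scene, object, and part tokens.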
Results: FrankaKitchen Experiments
To evaluate the performance of our hierarchical scene representations, we first test our method and baselines on five simulated Franka Kitchen tasks. Compared against four prior pre-trained image representations for robots, HODOR outperforms all of these methods at nearly all demonstration set sizes on all tasks besides OpenCabinetDoor, with higher average performance and lower standard errors (shaded region) throughout.
Results: Real Robot Experiments