SOFT: Recasting Generic Pretrained Vision Transformers As Object-Centric Scene Encoders For Manipulation Policies

University of Pennsylvania

ICRA 2024

Paper     arXiv (Coming Soon)     Supplementary Materials     Code (Coming Soon)

Overview

Scene Objects From Transformers, abbreviated SOFT(·), is a wrapper around pre-trained vision transformer (PVT) models that bridges the gap between generic PVTs and pre-trained robotics-specific image encoders on robotics tasks, without any further training.

Abstract

Generic, reusable pre-trained image representation encoders have become a standard component of methods for many computer vision tasks. As visual representations for robots, however, their utility has been limited, leading to a recent wave of efforts to pre-train robotics-specific image encoders that are better suited to robotic tasks than their generic counterparts. We propose SOFT, a wrapper around pre-trained vision transformer (PVT) models that bridges this gap without any further training. Rather than construct representations from only the final-layer activations, SOFT individuates and locates object-like entities from PVT attentions and describes them with PVT activations, producing an object-centric representation. Across standard choices of generic pre-trained vision transformers, we demonstrate in each case that policies trained on SOFT(PVT) far outstrip those trained on standard PVT representations for manipulation tasks in simulated and real settings, approaching state-of-the-art robotics-aware representations.
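The core idea described above (grouping patches into object-like entities via PVT attentions, then describing each entity with PVT activations) can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's implementation: `soft_like_pooling` is an invented helper, and random arrays stand in for real ViT patch features and attention maps. It clusters patches by the similarity of their attention rows (here, simple k-means with cosine similarity) and pools each cluster's features into one slot vector.

```python
import numpy as np

def soft_like_pooling(features, attn, n_slots=4, n_iters=10, seed=0):
    """Sketch of SOFT-style object-centric pooling (hypothetical helper).

    features: (N, D) patch activations from a pre-trained ViT
    attn:     (N, N) patch-to-patch attention (e.g., averaged over heads)

    Groups patches into object-like slots by k-means on their attention
    rows, then describes each slot by the mean of its patch features.
    """
    rng = np.random.default_rng(seed)
    n = attn.shape[0]
    # Initialize slot centroids from randomly chosen attention rows.
    centroids = attn[rng.choice(n, n_slots, replace=False)]
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iters):
        # Assign each patch to the most cosine-similar centroid.
        sims = (attn @ centroids.T) / (
            np.linalg.norm(attn, axis=1, keepdims=True)
            * np.linalg.norm(centroids, axis=1) + 1e-8)
        labels = sims.argmax(axis=1)
        # Update each centroid as the mean attention row of its members.
        for k in range(n_slots):
            mask = labels == k
            if mask.any():
                centroids[k] = attn[mask].mean(axis=0)
    # Describe each slot with the mean feature of its member patches.
    slots = np.stack([
        features[labels == k].mean(axis=0) if (labels == k).any()
        else np.zeros(features.shape[1])
        for k in range(n_slots)
    ])
    return slots, labels

# Usage with synthetic stand-ins for PVT outputs (14x14 = 196 patches):
rng = np.random.default_rng(1)
features = rng.normal(size=(196, 32))
attn = rng.random((196, 196))
attn /= attn.sum(axis=1, keepdims=True)  # row-normalize like softmax attention
slots, labels = soft_like_pooling(features, attn, n_slots=4)
```

The resulting `slots` array, one descriptor per object-like entity, is the kind of object-centric representation a downstream manipulation policy could consume in place of a single global feature vector.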

Video Summary

SOFT-ICRA2024.mov