Human-oriented Representation Learning for Robotic Manipulation

Abstract

Humans inherently possess generalizable visual representations that empower them to efficiently explore and interact with their environments in manipulation tasks. We advocate that such a representation automatically arises from simultaneously learning about multiple simple perceptual skills that are critical for everyday scenarios (e.g., hand detection, state estimation, etc.) and is better suited for learning robot manipulation policies than current state-of-the-art visual representations trained purely with self-supervised objectives. We formalize this idea through the lens of human-oriented multi-task fine-tuning on top of pre-trained visual encoders, where each task is a perceptual skill tied to human-environment interactions. We introduce the Task Fusion Decoder, a plug-and-play embedding translator that exploits the underlying relationships among these perceptual skills to guide representation learning toward encoding structure that matters for all of them, ultimately facilitating the learning of downstream robotic manipulation tasks. Extensive experiments across a range of robotic tasks and embodiments, in both simulated and real-world environments, show that our Task Fusion Decoder consistently improves the representations of three state-of-the-art visual encoders, R3M, MVP, and EgoVLP, for downstream manipulation policy learning.
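
To make the multi-task fine-tuning idea concrete, below is a minimal sketch (in PyTorch) of a shared pre-trained visual encoder with one lightweight head per perceptual skill, where the per-skill losses are summed so that every skill shapes the same representation. The skill names, head sizes, and loss structure are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class MultiSkillFineTuner(nn.Module):
    """Shared pre-trained encoder with one small head per perceptual skill."""

    def __init__(self, encoder: nn.Module, feat_dim: int = 2048):
        super().__init__()
        self.encoder = encoder  # e.g., an R3M / MVP / EgoVLP backbone
        self.heads = nn.ModuleDict({
            # Illustrative skills; output sizes are placeholders.
            "hand_detection": nn.Linear(feat_dim, 4),     # hand bounding box
            "state_estimation": nn.Linear(feat_dim, 10),  # discrete state logits
        })

    def forward(self, images: torch.Tensor) -> dict:
        feats = self.encoder(images)  # (B, feat_dim) image features
        return {name: head(feats) for name, head in self.heads.items()}

def multi_task_loss(outputs: dict, targets: dict, loss_fns: dict) -> torch.Tensor:
    # Summing per-skill losses lets every skill shape the shared representation.
    return sum(loss_fns[name](outputs[name], targets[name]) for name in outputs)
```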

Our Motivation

Left: human-oriented representation learning as a multi-task learner.  Right: robots leverage the human-oriented representation to learn various manipulation tasks.

Framework

The overall framework is shown below. The Task Fusion Decoder consists of cross-attention and self-attention layers; it adjusts the video encoder's representation and fuses information across the different perceptual tasks.
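
As a rough illustration of this design, the sketch below (in PyTorch) uses learnable per-task query tokens that first cross-attend to the encoder's visual tokens and then self-attend to exchange information across tasks. The dimensions, number of tasks, and residual/normalization layout are assumptions for illustration, not the exact released architecture.

```python
import torch
import torch.nn as nn

class TaskFusionDecoderBlock(nn.Module):
    def __init__(self, dim: int = 512, num_tasks: int = 4, num_heads: int = 8):
        super().__init__()
        self.task_queries = nn.Parameter(torch.randn(num_tasks, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, visual_tokens: torch.Tensor) -> torch.Tensor:
        # visual_tokens: (B, N, dim) features from the pre-trained encoder.
        B = visual_tokens.size(0)
        q = self.task_queries.unsqueeze(0).expand(B, -1, -1)  # (B, num_tasks, dim)
        # Cross-attention: each task query gathers task-relevant visual evidence.
        q = self.norm1(q + self.cross_attn(q, visual_tokens, visual_tokens)[0])
        # Self-attention: task tokens exchange information, fusing the skills.
        q = self.norm2(q + self.self_attn(q, q, q)[0])
        return q + self.ffn(q)  # (B, num_tasks, dim) task-fused embeddings
```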

Simulator Demos:

Franka Kitchen:

knob-on

ldoor-open

light-on

micro-open

sdoor-open

Metaworld:

hammer

bin-pick

assembly

drawer-open

button-press

Adroit:

ball-relocate

pen

Real World Robot Dataset:

We collect a Fanuc Manipulation dataset for robot behavior cloning, comprising 17 manipulation tasks and 450 expert demonstrations, as shown in Fig. 5. We use a FANUC LRMate 200iD/7L robotic arm outfitted with an SMC gripper, controlled via operational-space velocity control. Demonstrations are collected through a human operator interface that uses a keyboard to control the robot's end effector: a set of seven key bindings provides 3D translational, 3D rotational, and 1D gripper commands. During each demonstration we record camera images, robot joint angles, joint velocities, and expert actions.
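
The listing below is a hypothetical sketch of such a key-binding scheme for operational-space velocity commands. The actual keys, modifier behavior, and velocity scales used during data collection are not documented here, so everything in this sketch should be read as an assumption (one plausible reading of "seven key bindings": six directional keys plus a gripper key, with a translate/rotate mode switch).

```python
import numpy as np

# Placeholder key-to-axis mapping; the dataset's real bindings may differ.
KEY_TO_DELTA = {
    "w": np.array([+1.0, 0.0, 0.0]), "s": np.array([-1.0, 0.0, 0.0]),  # x axis
    "a": np.array([0.0, +1.0, 0.0]), "d": np.array([0.0, -1.0, 0.0]),  # y axis
    "q": np.array([0.0, 0.0, +1.0]), "e": np.array([0.0, 0.0, -1.0]),  # z axis
}
GRIPPER_TOGGLE_KEY = "g"  # 1D gripper open/close

def key_to_action(key: str, rotate_mode: bool, speed: float = 0.05) -> np.ndarray:
    """Map a key press to a 7D action: [dx, dy, dz, droll, dpitch, dyaw, gripper]."""
    action = np.zeros(7)
    if key == GRIPPER_TOGGLE_KEY:
        action[6] = 1.0
    elif key in KEY_TO_DELTA:
        delta = speed * KEY_TO_DELTA[key]
        if rotate_mode:
            action[3:6] = delta  # interpret as rotational velocity
        else:
            action[0:3] = delta  # interpret as translational velocity
    return action
```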

Task Category Distribution:

Task Demo:

Disassembly

Pick and Place

Stack cups

Close laptop

Real World Robot Experiments with Our Model:

open drawer

close laptop

push cube to blue point

push box over blue line

Experiment Results