Diffusion policies are conditional diffusion models that learn robot action distributions conditioned on the robot and environment state. They have recently been shown to outperform both deterministic and alternative action distribution learning formulations. 3D robot policies use 3D scene feature representations aggregated from one or more camera views using sensed depth, and have been shown to generalize better than their 2D counterparts across camera viewpoints. We unify these two lines of work and present 3D Diffuser Actor, a neural policy equipped with a novel 3D denoising transformer that fuses information from the 3D visual scene, a language instruction and proprioception to predict the noise in noised 3D robot pose trajectories. 3D Diffuser Actor sets a new state-of-the-art on RLBench with an absolute performance gain of 18.1% over the current SOTA on a multi-view setup and an absolute gain of 13.1% on a single-view setup. On the CALVIN benchmark, it improves over the current SOTA by a 9% relative increase. It also learns to control a robot manipulator in the real world from a handful of demonstrations. Through thorough comparisons with current SOTA policies and ablations of our model, we show that 3D Diffuser Actor's design choices dramatically outperform 2D representations, regression and classification objectives, absolute attentions, and holistic non-tokenized 3D scene embeddings.
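To make "predict the noise in noised 3D robot pose trajectories" concrete, here is a minimal, self-contained sketch of the standard DDPM noise-prediction training objective that diffusion policies build on. The function `toy_eps_model` is a hypothetical stand-in for the paper's 3D denoising transformer, and the linear beta schedule and 7-dimensional pose vector are illustrative assumptions, not details from the paper.

```python
import math
import random

# Hypothetical linear noise schedule (T steps); the paper's exact
# schedule is not specified here.
T = 100
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]
alpha_bars = []
_prod = 1.0
for b in betas:
    _prod *= (1.0 - b)           # cumulative product of alphas
    alpha_bars.append(_prod)

def toy_eps_model(x_t, t):
    # Stand-in for the 3D denoising transformer, which would also
    # condition on the 3D scene tokens, language and proprioception.
    return [0.0 for _ in x_t]

def diffusion_loss(x0):
    """One training step: noise a clean pose trajectory x0 at a random
    timestep, then regress the injected noise with an MSE loss."""
    t = random.randrange(T)
    eps = [random.gauss(0.0, 1.0) for _ in x0]
    a = alpha_bars[t]
    # Forward process: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps
    x_t = [math.sqrt(a) * x + math.sqrt(1.0 - a) * e for x, e in zip(x0, eps)]
    eps_hat = toy_eps_model(x_t, t)
    return sum((e - eh) ** 2 for e, eh in zip(eps, eps_hat)) / len(x0)

# Example: loss for a toy 7-D end-effector pose (position + rotation + gripper).
loss = diffusion_loss([0.1] * 7)
```

In the actual model the regression target is the per-step position and rotation residual of the noised trajectory rather than a generic noise vector, but the training recipe follows this noise-prediction template.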
Top: 3D Diffuser Actor is a denoising diffusion probabilistic model of the robot's 3D trajectory, conditioned on sensory input, language goals and proprioceptive information (action history). The model is a 3D relative-position denoising transformer that jointly featurizes the scene and the current noisy estimate of the robot's future action trajectory through 3D relative-position attentions. 3D Diffuser Actor outputs position and rotation residuals for denoising, as well as the end-effector's state (open/close). Bottom: During inference, 3D Diffuser Actor iteratively denoises its estimate of the robot's future trajectory, starting from pure noise.
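The inference loop described above can be sketched as a standard DDPM reverse process: initialize the trajectory from pure Gaussian noise, then repeatedly subtract the predicted noise. This is a generic sketch under assumed schedule values; `eps_model` stands in for the conditioned 3D denoising transformer, and the trajectory dimension is a placeholder.

```python
import math
import random

# Assumed linear schedule, matching a generic DDPM setup.
T = 100
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]
alphas = [1.0 - b for b in betas]
alpha_bars = []
_prod = 1.0
for a in alphas:
    _prod *= a
    alpha_bars.append(_prod)

def sample_trajectory(eps_model, dim=7):
    """DDPM reverse process: start from pure noise and iteratively denoise."""
    x = [random.gauss(0.0, 1.0) for _ in range(dim)]  # initial estimate: pure noise
    for t in reversed(range(T)):
        eps_hat = eps_model(x, t)
        a_t, ab_t = alphas[t], alpha_bars[t]
        # Posterior mean: remove the predicted noise component.
        mean = [(xi - (1.0 - a_t) / math.sqrt(1.0 - ab_t) * ei) / math.sqrt(a_t)
                for xi, ei in zip(x, eps_hat)]
        if t > 0:
            sigma = math.sqrt(betas[t])  # add fresh noise except at the last step
            x = [m + sigma * random.gauss(0.0, 1.0) for m in mean]
        else:
            x = mean
    return x

# Usage with a dummy noise predictor (the real model conditions on the
# 3D scene, language instruction and proprioception at every step).
trajectory = sample_trajectory(lambda x, t: [0.0] * len(x))
```

Each iteration corresponds to one denoising step in the figure; the final `trajectory` is the model's predicted future end-effector trajectory.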
We show 60 samples for the next predicted trajectory on CALVIN.
We show 60 samples for the next predicted keypose in the real-world. 3D Diffuser Actor captures all modes of equivalent behaviors for the different tasks: 3 modes for "picking up the grape", 2 modes for "inserting the peg in a hole" and 2 modes for "finding the mouse".
We test on CALVIN in the zero-shot unseen-scene generalization setting. All models are trained on environments A, B and C and tested on environment D. 3D Diffuser Actor outperforms prior art by a large margin, completing 0.29 more tasks on average, a 9% relative increase.
We test on a multi-task setup of 18 manipulation tasks on RLBench. All models use 4 camera views and 100 expert demonstrations per task. 3D Diffuser Actor outperforms prior art by a large margin, achieving a 16.0% absolute performance gain on average across tasks.
We train a multi-task 3D Diffuser Actor on 12 manipulation tasks in the real world to control a Franka Emika arm. All models use a single camera view and 15 demonstrations per task. 3D Diffuser Actor is able to solve multimodal tasks in the real world.