MV-MWM can solve visual robotic manipulation tasks with random viewpoints even in the absence of camera calibration.
[Video panels: Viewpoint 1, Viewpoint 2, Viewpoint 3]
Surprisingly, MV-MWM can solve visual robotic manipulation tasks with hand-held cameras.
[Video panels: Rotation, Shake, Translation, Zoom]
Visual robotic manipulation research and applications often use multiple cameras, or views, to better perceive the world. How else can we utilize the richness of multi-view data? In this paper, we investigate how to learn good representations from multi-view data and utilize them for visual robotic manipulation. Specifically, we train a multi-view masked autoencoder that reconstructs pixels of randomly masked viewpoints, and then learn a world model operating on the representations from the autoencoder. We demonstrate the effectiveness of our method in a range of scenarios, including multi-view control and single-view control with auxiliary cameras for representation learning. We also show that a multi-view masked autoencoder trained with multiple randomized viewpoints enables training a policy under strong viewpoint randomization and transferring that policy to real-robot tasks without camera calibration or an adaptation procedure.
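To make the masking scheme concrete, here is a minimal PyTorch sketch of view-masking as described above. The tensor layout, the 0.75 masking ratio, and the `view_mask` name are illustrative assumptions on our part, not the paper's implementation.

```python
import torch

def view_mask(frames: torch.Tensor, mask_ratio: float = 0.75):
    """frames: (batch, time, views, channels, height, width).

    Returns the masked frames and a boolean mask of shape
    (batch, time, views) where True marks a hidden viewpoint.
    Shapes and ratio are illustrative, not the paper's code.
    """
    b, t, v = frames.shape[:3]
    # One Bernoulli draw per (clip, frame, view): True = hide this view.
    mask = torch.rand(b, t, v) < mask_ratio
    # Zero out every pixel of the hidden viewpoints. Reconstruction targets
    # remain the original pixels of *all* views, masked and unmasked.
    visible = (~mask).float()[..., None, None, None]  # (b, t, v, 1, 1, 1)
    return frames * visible, mask
```

Because the mask is sampled per frame, a viewpoint hidden at one timestep may remain visible at another, which is what lets video autoencoding help reconstruct masked views.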
Given multi-view data from multiple cameras or multiple randomized viewpoints, we randomly mask viewpoints from video frames and train a multi-view masked autoencoder to reconstruct the pixels of both masked and unmasked viewpoints. We then learn a world model on top of the frozen autoencoder representations to solve tasks in various robotic manipulation setups, including multi-view control, single-view control, and viewpoint-robust control, in both simulation and the real world.
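The world-model stage can be sketched in the same spirit. The toy model below only illustrates the interface of learning latent dynamics on top of frozen autoencoder features; the module names, dimensions, and the simplified recurrent update are our assumptions, and the actual agent is more elaborate.

```python
import torch
import torch.nn as nn

class LatentWorldModel(nn.Module):
    """Toy recurrent latent dynamics model over frozen autoencoder features.
    Illustrative sketch only; names and sizes are assumptions."""

    def __init__(self, feat_dim: int = 768, latent_dim: int = 256, action_dim: int = 4):
        super().__init__()
        self.posterior = nn.Linear(feat_dim, latent_dim)  # infer latent from features
        self.dynamics = nn.GRUCell(latent_dim + action_dim, latent_dim)
        self.reward = nn.Linear(latent_dim, 1)

    def forward(self, feats: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
        # feats: (time, batch, feat_dim), computed under torch.no_grad() from
        # the frozen multi-view autoencoder; actions: (time, batch, action_dim).
        h = feats.new_zeros(feats.shape[1], self.dynamics.hidden_size)
        rewards = []
        for feat_t, act_t in zip(feats, actions):
            z = self.posterior(feat_t)                     # ground state in observations
            h = self.dynamics(torch.cat([z, act_t], dim=-1), h)
            rewards.append(self.reward(h))
        return torch.stack(rewards)                        # (time, batch, 1)
```

Keeping the autoencoder frozen means the world model only has to fit dynamics and reward in a fixed representation space, rather than learning visual features from scratch.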
(a) We show that the proposed view-masking significantly outperforms uniform masking (a sketch of both schemes follows this list).
(b) We show that combining view-masking with video autoencoding is synergistic: video autoencoding gives the model access to unmasked frames of the same view at other timesteps, which makes masked viewpoints easier to reconstruct.
(c) We show that a high masking ratio is crucial for performance.
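For concreteness, here is a hedged sketch of the two masking schemes compared in (a); the patch-grid shapes and function names are our own, and the `ratio` argument plays the role of the masking ratio studied in (c).

```python
import torch

def uniform_mask(b: int, t: int, v: int, p: int, ratio: float = 0.75) -> torch.Tensor:
    """Baseline: hide individual patch tokens anywhere in the
    (time, view, patch) grid, independently per token."""
    return torch.rand(b, t, v, p) < ratio

def viewwise_mask(b: int, t: int, v: int, p: int, ratio: float = 0.75) -> torch.Tensor:
    """View-masking: hide all patches of a viewpoint together (per frame),
    so reconstruction must rely on the other views and timesteps."""
    per_view = torch.rand(b, t, v) < ratio                 # (b, t, v)
    return per_view[..., None].expand(b, t, v, p)          # same decision for every patch
```

The only difference is whether the Bernoulli draw happens per patch or per view: masking a whole view removes all within-view redundancy, so the model must pull information from the other views and timesteps rather than from neighboring patches.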