A Simple Approach for Visual Room Rearrangement: 3D Mapping and Semantic Search


Brandon Trabucco¹, Gunnar Sigurdsson², Robinson Piramuthu², Gaurav S. Sukhatme²,³, Ruslan Salakhutdinov¹

¹Carnegie Mellon University, ²Amazon Alexa AI, ³University of Southern California

Paper: ICLR 2023 · Code: GitHub

Abstract

Physically rearranging objects is an important capability for embodied agents. Visual room rearrangement evaluates an agent's ability to rearrange objects in a room to match a desired goal state based solely on visual input. We propose a simple yet effective method for this problem: (1) search for and map which objects need to be rearranged, and (2) rearrange each object until the task is complete. Our approach consists of an off-the-shelf semantic segmentation model, a voxel-based semantic map, and a semantic search policy that efficiently finds objects needing rearrangement. On the AI2-THOR Rearrangement Challenge, our method improves over current state-of-the-art end-to-end reinforcement learning methods that learn visual rearrangement policies, raising correct rearrangement from 0.53% to 15.11% while using only 2.7% as many samples from the environment.
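
To make the mapping step concrete, below is a minimal sketch (not the authors' implementation) of how a voxel-based semantic map might accumulate per-pixel segmentation labels into a 3D grid, and how diffing a goal-phase map against a current-phase map yields candidate objects to rearrange. All names here (`VoxelSemanticMap`, `integrate`, `objects_to_rearrange`, and the grid parameters) are hypothetical; the actual system additionally involves the semantic search policy and low-level rearrangement actions.

```python
import numpy as np


class VoxelSemanticMap:
    """Hypothetical sketch of a voxel grid that accumulates semantic
    class votes per cell, built from depth-backprojected points labeled
    by an off-the-shelf semantic segmentation model."""

    def __init__(self, bounds, voxel_size=0.05, num_classes=50):
        # bounds = (min_corner_xyz, max_corner_xyz) of the room.
        self.origin = np.asarray(bounds[0], dtype=np.float32)
        extent = np.asarray(bounds[1], dtype=np.float32) - self.origin
        self.shape = np.ceil(extent / voxel_size).astype(int)
        self.voxel_size = voxel_size
        # One histogram of class votes per voxel.
        self.votes = np.zeros((*self.shape, num_classes), dtype=np.int32)

    def integrate(self, points_world, labels):
        """Add labeled 3D points (depth pixels lifted to world frame,
        each carrying its predicted semantic class) into the grid."""
        idx = np.floor((points_world - self.origin) / self.voxel_size).astype(int)
        valid = np.all((idx >= 0) & (idx < self.shape), axis=1)
        ix, iy, iz = idx[valid].T
        np.add.at(self.votes, (ix, iy, iz, labels[valid]), 1)

    def semantic_grid(self):
        """Most-voted class per voxel; -1 marks unobserved cells."""
        grid = self.votes.argmax(axis=-1)
        grid[self.votes.sum(axis=-1) == 0] = -1
        return grid


def objects_to_rearrange(goal_map, current_map):
    """Semantic classes whose voxel footprints disagree between the
    map built at the goal state and the map built at the shuffled state."""
    goal, cur = goal_map.semantic_grid(), current_map.semantic_grid()
    observed = (goal >= 0) & (cur >= 0)
    return np.unique(goal[observed & (goal != cur)])
```

In the full method, this kind of map disagreement would drive exploration and rearrangement; the sketch only recovers which semantic classes occupy different voxels in the two maps.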