TarGF: Learning Target Gradient Field for Object Rearrangement
Mingdong Wu*, Fangwei Zhong*, Yulong Xia, Hao Dong
CFCS, School of CS, Peking University,
BIGAI
arXiv / GitHub

Abstract

Object rearrangement is the task of moving objects from shuffled layouts to a normative target distribution, e.g., a tidy room. However, it remains challenging for AI agents, as it is hard to describe the target distribution (goal state) for reward engineering or to collect expert trajectories as demonstrations. Hence, it is infeasible to directly employ reinforcement learning or imitation learning algorithms to address the task. This paper aims to search for a policy using only a set of examples from a target distribution instead of a handcrafted reward function. We employ the score-matching objective to train a Target Gradient Field (TarGF), which indicates a direction on each object that increases the likelihood under the target distribution. The TarGF can be used in two ways: 1) for model-based planning, we cast the target gradient into a reference control and output actions with a distributed path planner; 2) for model-free reinforcement learning, the TarGF is used not only to estimate the delta likelihood as a reward but also to provide suggested actions for residual policy learning. Experimental results in ball rearrangement and room rearrangement demonstrate that our method significantly outperforms state-of-the-art methods in terms of the quality of the terminal state, the efficiency of the control process, and scalability.
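To make the score-matching objective concrete, below is a minimal sketch of denoising score matching for training a TarGF, assuming a PyTorch score network score_net(x, t) and a variance-exploding noise schedule; the names, shapes, and hyperparameters here are illustrative assumptions, not the released implementation.

    import torch

    sigma_min, sigma_max = 0.01, 1.0  # hypothetical noise schedule

    def sigma(t):
        # Variance-exploding noise scale, as in score-based generative models.
        return sigma_min * (sigma_max / sigma_min) ** t

    def dsm_loss(score_net, x0):
        # x0: a batch of flattened object states sampled from target examples.
        t = torch.rand(x0.shape[0], device=x0.device) * (1 - 1e-5) + 1e-5
        std = sigma(t).unsqueeze(-1)
        noise = torch.randn_like(x0)
        xt = x0 + std * noise  # perturb examples toward higher noise levels
        # For a Gaussian perturbation kernel, the score target is -noise / std;
        # weighting the loss by std^2 balances scales across noise levels.
        pred = score_net(xt, t)
        return ((pred * std + noise) ** 2).sum(dim=-1).mean()

At test time, querying the trained score_net at a small t gives, per object, the direction that increases the likelihood under the target distribution.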



Results of Ours (ORCA) in Ball Rearrangement

We set t=0.01 for the gradient-based action.
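As a rough illustration of how the gradient-based action is obtained at t=0.01, the sketch below queries the field at that small noise level and clips the result into per-ball preferred velocities for ORCA; score_net and max_vel are assumed names for illustration, not the released code.

    import torch

    def reference_velocities(score_net, state, t=0.01, max_vel=0.3):
        # Query the TarGF at a small noise level, where the field approximates
        # the score of the (unperturbed) target distribution.
        with torch.no_grad():
            t_batch = torch.full((1,), t, device=state.device)
            grad = score_net(state.unsqueeze(0), t_batch)[0]
        vels = grad.view(-1, 2)  # one 2-D reference velocity per ball
        # Clip speeds so the preferred velocities stay feasible for the planner.
        speed = vels.norm(dim=-1, keepdim=True).clamp(min=1e-8)
        vels = vels * (speed.clamp(max=max_vel) / speed)
        return vels  # fed to ORCA as each agent's preferred velocity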

Circling
Horizon=200

Clustering
Horizon=200

Circling + Clustering
Horizon=300

Results of Ours (SAC) in Ball Rearrangement
We set t=0.01 for the gradient-based action.
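For the model-free variant, here is a hedged sketch of the two roles the TarGF plays: a first-order delta-likelihood reward and a suggested action that the SAC actor refines residually. policy, score_net, and the tensor shapes are illustrative assumptions.

    import torch

    T0 = 0.01  # noise level used when querying the gradient field

    def suggested_action(score_net, state):
        # Gradient-based action: direction that increases target likelihood.
        with torch.no_grad():
            t = torch.full((1,), T0, device=state.device)
            return score_net(state.unsqueeze(0), t)[0]

    def delta_likelihood_reward(score_net, state, next_state):
        # First-order estimate of the likelihood change:
        # log p(s') - log p(s) ~ grad log p(s) . (s' - s).
        grad = suggested_action(score_net, state)
        return torch.dot(grad, next_state - state).item()

    def act(policy, score_net, state):
        # Residual policy learning: execute the gradient-based suggestion
        # plus a learned correction from the SAC actor.
        return suggested_action(score_net, state) + policy(state)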

Circling
Horizon=200

Clustering
Horizon=200

Circling + Clustering
Horizon=300

Results of Ours (SAC) in Room Rearrangement
We set t=0.01 for the gradient-based action. Horizon=250.

Acknowledgements

This work was supported by the National Natural Science Foundation of China, Youth Science Fund (No. 62006006): Learning Visual Prediction of Interactive Physical Scenes using Unlabelled Videos. Fangwei Zhong was supported by the China National Postdoctoral Program for Innovative Talents (Grant No. BX2021008). We also thank Tianhao Wu and Yali Du for insightful discussions.

Citation

@article{wu2022targf,
  title={TarGF: Learning Target Gradient Field to Rearrange Objects without Explicit Goal Specification},
  author={Wu, Mingdong and Zhong, Fangwei and Xia, Yulong and Dong, Hao},
  journal={Advances in Neural Information Processing Systems},
  volume={35},
  pages={31986--31999},
  year={2022}
}