LVDiffusor:

Distilling Functional Rearrangement Priors from Large Models into Diffusor

Yiming Zeng*, Mingdong Wu*, Long Yang, Jiyao Zhang, Hao Ding, Hui Cheng, Hao Dong

Abstract

Object rearrangement, a fundamental challenge in robotics, demands versatile strategies to handle diverse objects, configurations, and functional needs. To achieve this, a robot needs to learn functional rearrangement priors in order to specify precise goals that satisfy the functional requirements. Previous methods typically learn such priors from either laborious human annotations or manually designed heuristics, which limits scalability and generalization. In this work, we propose a novel approach that leverages large models to distill functional rearrangement priors. Specifically, our approach collects diverse arrangement examples using both LLMs and VLMs and then distills these examples into a diffusion model. At test time, the learned diffusion model is conditioned on the initial configuration and guides the positioning of objects to meet the functional requirements. In this manner, we create a handshaking point that combines the strengths of conditional generative models and large models. Extensive experiments on multiple domains, including real-world scenarios, demonstrate the effectiveness of our approach in generating compatible goals for object rearrangement tasks, significantly outperforming baseline methods.

---

Key Idea: LVDiffusor distills functional arrangement knowledge from large models into a diffusion model to generate well-organized and compatible layouts from everyday cluttered scenes. A conceptual sketch of the conditional generation step is given below.
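To make the key idea concrete, the following is a minimal sketch (not the authors' implementation) of how a diffusion model conditioned on an initial object configuration can denoise random noise into a goal arrangement. The 2D position-only state representation, network size, noise schedule, and object count are all illustrative assumptions.

```python
# Minimal DDPM-style sketch: generate a goal arrangement conditioned on the
# initial (cluttered) configuration. All hyperparameters are illustrative.
import torch
import torch.nn as nn

NUM_OBJECTS = 4              # assumed number of objects in the scene
STATE_DIM = NUM_OBJECTS * 2  # (x, y) position per object
T = 100                      # number of diffusion steps

class NoisePredictor(nn.Module):
    """Predicts the noise added to a goal arrangement, conditioned on the
    initial configuration and the diffusion timestep."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM * 2 + 1, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, STATE_DIM),
        )

    def forward(self, noisy_goal, init_config, t):
        t_embed = t.float().unsqueeze(-1) / T
        return self.net(torch.cat([noisy_goal, init_config, t_embed], dim=-1))

betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def sample_goal(model, init_config):
    """Reverse diffusion: start from Gaussian noise and iteratively denoise
    it into a goal layout compatible with the initial configuration."""
    x = torch.randn_like(init_config)
    for t in reversed(range(T)):
        t_batch = torch.full((x.shape[0],), t)
        eps = model(x, init_config, t_batch)
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x  # predicted goal positions for each object

# Example: sample a goal layout for one cluttered scene.
model = NoisePredictor()
init = torch.rand(1, STATE_DIM)   # placeholder initial configuration
goal = sample_goal(model, init)   # goal layout to hand to a planner
```

In this sketch the model is trained on arrangement examples collected from LLMs and VLMs, so sampling from it amounts to querying the distilled functional prior rather than calling the large models at test time.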

Prompt Engineering for LLMs

Chain-of-Thought (CoT) Strategy & In-Context Learning Details
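Below is a minimal sketch (an assumed prompt structure, not the exact prompts used in the paper) of how chain-of-thought reasoning and an in-context example can be combined to query an LLM for a functional arrangement; the example scene, coordinate convention, and the "left-handed" requirement string are illustrative.

```python
# Sketch of a CoT + in-context prompt for eliciting arrangement goals from an
# LLM. The worked example and coordinate format are assumptions.
IN_CONTEXT_EXAMPLE = """\
Objects: plate, fork, knife, cup
Reasoning: The plate sits in front of the user; the fork goes to its left,
the knife to its right, and the cup to the upper right.
Arrangement: plate (0.50, 0.40), fork (0.35, 0.40), knife (0.65, 0.40), cup (0.72, 0.60)
"""

def build_prompt(objects, requirement="vanilla"):
    """Compose a CoT prompt: task description, one worked example, then the
    query scene. The functional requirement (e.g. 'left-handed') steers the
    step-by-step reasoning before positions are emitted."""
    return (
        "You arrange objects on a table so the layout is functional.\n"
        "First reason step by step about where each object should go, "
        f"then output normalized (x, y) positions. Requirement: {requirement}.\n\n"
        f"Example:\n{IN_CONTEXT_EXAMPLE}\n"
        f"Objects: {', '.join(objects)}\nReasoning:"
    )

print(build_prompt(["laptop", "mouse", "mug", "notebook"], requirement="left-handed"))
```

The worked example supplies the in-context demonstration, while asking for the reasoning before the final positions realizes the chain-of-thought step; the LLM's parsed arrangements then serve as training examples for the diffusion model.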

Real-world Experiments

Dinner Table (vanilla)

Dinner Table (left-handed)

Office Desk (vanilla)

Office Desk (left-handed)

Qualitative Results