DEF-oriCORN: efficient 3D scene understanding for robust language-directed manipulation without demonstrations

Dongwon Son, Sanghyeon Son, Jaehyung Kim, and Beomjoon Kim

Korea Advanced Institute of Science and Technology (KAIST), Republic of Korea

{dongwon.son, ssh98son, kimjaehyung, beomjoon.kim}@kaist.ac.kr

Abstract

We present DEF-oriCORN, a framework for language-directed manipulation tasks. By leveraging a novel object-based scene representation and a diffusion-model-based state estimation algorithm, our framework enables efficient and robust manipulation planning in response to verbal commands, even in tightly packed environments observed from sparse camera views, and without any demonstrations. Unlike traditional representations, ours affords efficient collision checking and language grounding. Compared to state-of-the-art baselines, our framework achieves superior estimation and motion planning performance from sparse RGB images and generalizes zero-shot to real-world scenarios with diverse materials, including transparent and reflective objects, despite being trained exclusively in simulation.


Real-time language-directed pick-and-place with DEF-oriCORN

Motion planning with ShaPO and DEF-oriCORN