Gesture-Informed Robot Assistance via Foundation Models

Li-Heng Lin, Yuchen Cui, Yilun Hao, Fei Xia, Dorsa Sadigh


Paper   Tweet  Code (coming soon!)   

LLM Prompts   Dataset (GestureInstruct)


Overview Video

GIRAF-CoRL23-v3.mp4

Abstract

Gestures are a fundamental and significant mode of non-verbal communication among humans. Deictic gestures (such as pointing at an object), in particular, offer a valuable means of efficiently expressing intent in situations where language is inaccessible, restricted, or highly specialized. It is therefore essential for robots to comprehend gestures in order to infer human intent and coordinate with people more effectively.

Prior work often relies on a rigid, hand-coded library of gestures and their meanings. However, the interpretation of gestures is often context-dependent, requiring more flexibility and common-sense reasoning. In this work, we propose GIRAF, a framework for more flexibly interpreting gesture and language instructions by leveraging the power of large language models.

Our framework accurately infers human intent and contextualizes the meaning of gestures for more effective human-robot collaboration. We instantiate the framework for two tabletop manipulation tasks and demonstrate that it is both effective and preferred by users.

We further demonstrate GIRAF's ability to reason about diverse types of gestures by curating GestureInstruct, a dataset of 36 task scenarios. GIRAF achieves an 81% success rate at finding the correct plan for tasks in GestureInstruct.
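
To make the framework description above concrete, below is a minimal, hypothetical sketch (not the released GIRAF code, which is linked above) of how a deictic gesture could be folded into an LLM planning prompt: a pointing ray is resolved to the nearest scene object, and the result is combined with the spoken instruction into a plan-generation query. The function names, object list, and geometry are illustrative assumptions only.

import numpy as np

def resolve_pointed_object(ray_origin, ray_direction, object_positions):
    """Return the name of the object whose center lies closest to the pointing ray."""
    direction = ray_direction / np.linalg.norm(ray_direction)
    best_name, best_dist = None, float("inf")
    for name, center in object_positions.items():
        offset = center - ray_origin
        # Project the object center onto the ray; clamp so objects behind
        # the hand are measured from the ray origin instead.
        t = max(float(np.dot(offset, direction)), 0.0)
        dist = float(np.linalg.norm(offset - t * direction))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

def build_plan_prompt(instruction, pointed_object):
    """Fold the resolved gesture target into a language-only planning prompt."""
    return (
        f"The user said: '{instruction}' while pointing at the {pointed_object}. "
        "Write a short step-by-step plan for a tabletop robot to satisfy the request."
    )

if __name__ == "__main__":
    # Hypothetical scene: object centers in the robot's base frame (meters).
    objects = {
        "top drawer": np.array([0.6, 0.1, 0.4]),
        "hammer": np.array([0.4, -0.3, 0.2]),
    }
    target = resolve_pointed_object(
        ray_origin=np.array([0.0, 0.0, 1.2]),      # e.g., a wrist keypoint
        ray_direction=np.array([0.5, 0.1, -0.7]),  # e.g., wrist-to-fingertip vector
        object_positions=objects,
    )
    prompt = build_plan_prompt("open that one, please", target)
    print(prompt)  # pass this prompt to an LLM-based planner (e.g., Code as Policies)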

System Diagram

GIRAF_system_diagram.mp4

Experiments

Pointing at one object

open_drawer.MOV
fetch_tool.MOV

Pointing at multiple objects

open_two_drawers_1.mov
open_two_drawers_2.MOV
open_three_drawers.MOV

Pointing at one object + placing position

fetch_and_place_tool_1.MOV
fetch_and_place_tool_2.MOV

Pointing at a specific grasp position

specific_grasp_point_1.MOV
specific_grasp_point_2.MOV

Long-horizon tasks with diverse gestures

long_horizon_1.MOV
long_horizon_2.MOV

User Study Results

To test whether GIRAF improves over the language-only baseline Code as Policies (CaP) [1], we conducted a user study on the two tasks from the "Pointing at one object" experiments above. The results are shown below, and we also show comparisons on some specific cases here.

References

[1] Jacky Liang, Wenlong Huang, Fei Xia, Peng Xu, Karol Hausman, Brian Ichter, Pete Florence, and Andy Zeng. "Code as Policies: Language Model Programs for Embodied Control." arXiv preprint arXiv:2209.07753 (2022).

Citation

@inproceedings{lin2023giraf,
  title={Gesture-Informed Robot Assistance via Foundation Models},
  author={Lin, Li-Heng and Cui, Yuchen and Hao, Yilun and Xia, Fei and Sadigh, Dorsa},
  booktitle={Proceedings of the 7th Conference on Robot Learning (CoRL)},
  year={2023}
}