R2H: Building Multimodal Navigation Helpers that Respond to Help Requests
Yue Fan, Jing Gu, Kaizhi Zheng, Xin Eric Wang
University of California, Santa Cruz
Abstract:
Intelligent navigation-helper agents are critical: by combining environmental awareness with conversational ability, they can guide users through unknown areas and serve as potential accessibility tools for individuals with disabilities. In this work, we first introduce a novel benchmark, Respond to Help Requests (R2H), to promote the development of multimodal navigation helpers capable of responding to requests for help, utilizing existing dialog-based embodied datasets. R2H mainly includes two tasks: (1) Respond to Dialog History (RDH), which assesses the helper agent's ability to generate informative responses based on a given dialog history, and (2) Respond during Interaction (RdI), which evaluates the effectiveness and efficiency of responses during sustained cooperation with a task performer. Furthermore, we explore two approaches to constructing the navigation-helper agent: fine-tuning a novel task-oriented multimodal response generation model that can see and respond, named SeeRee, and employing a multimodal large language model in a zero-shot manner. We analyze the tasks and methods through both automatic benchmarking and human evaluations.
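As a rough illustration of the second, zero-shot approach, the sketch below shows how a multimodal LLM could be prompted as a navigation helper. The callable `mllm_fn` and all other names are our own placeholders, not the paper's actual implementation.

```python
# A minimal sketch, assuming access to some multimodal LLM exposed as a
# caller-supplied callable `mllm_fn(prompt, image) -> str`. All names here
# are illustrative placeholders, not the paper's implementation.
from typing import Callable, List


def build_helper_prompt(dialog_history: List[str], help_request: str) -> str:
    """Pack the dialog so far and the new help request into one text prompt."""
    turns = "\n".join(dialog_history)
    return (
        "You are a navigation helper. Using the attached view of the task "
        "performer's surroundings and the dialog so far, give a short, "
        "informative navigation instruction.\n"
        f"Dialog history:\n{turns}\n"
        f"Help request: {help_request}\n"
        "Response:"
    )


def respond_zero_shot(
    mllm_fn: Callable[[str, bytes], str],  # hypothetical (prompt, image) -> text
    dialog_history: List[str],
    help_request: str,
    performer_view: bytes,  # e.g. the performer's current panoramic image
) -> str:
    """Query the multimodal LLM once; no fine-tuning involved."""
    prompt = build_helper_prompt(dialog_history, help_request)
    return mllm_fn(prompt, performer_view)
```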
Our R2H benchmark promotes the development of multimodal navigation helpers.
- What does the helper agent do?
Given a navigation inquiry, the helper agent provides the task performer with a response grounded in the performer's visual surroundings.
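Concretely, this implies an interface along the lines of the minimal sketch below; the class, field, and method names are our own illustrations, not the benchmark's actual API.

```python
# Illustrative interface only: the helper receives the performer's inquiry
# together with the performer's current visual observation and the dialog so
# far, and returns a natural-language response. Names are assumptions.
from dataclasses import dataclass, field
from typing import List, Protocol


@dataclass
class HelpRequest:
    inquiry: str                  # e.g. "Should I go through the door on my left?"
    performer_views: List[bytes]  # the performer's current (panoramic) observations
    dialog_history: List[str] = field(default_factory=list)  # prior turns, oldest first


class NavigationHelper(Protocol):
    def respond(self, request: HelpRequest) -> str:
        """Return a response grounded in the performer's visual surroundings."""
        ...
```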
- How does the R2H benchmark differ from existing dialog-based embodied benchmarks?
Existing dialog-based embodied benchmarks center on training and evaluating the task performer; R2H instead targets the helper, evaluating its ability to produce informative, visually grounded responses while reusing existing dialog-based embodied datasets.
- What are the RDH task and RdI task proposed in R2H benchmark?
Respond to Dialog History (RDH): the helper agent generates a response to each individual help request from the task performer, given the dialog history up to that point, across three different environments.
Respond during Interaction (RdI): the helper agent cooperates with the task performer continuously, responding to help requests throughout an ongoing navigation episode.
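The two tasks differ mainly in the control loop, roughly as in the sketch below, which reuses the hypothetical NavigationHelper interface from above; the `performer` and `env` objects are likewise assumptions, not the benchmark's actual code.

```python
# RDH: single-turn. The helper answers each pre-recorded help request once;
# the task performer never reacts, so only response quality is measured.
def run_rdh(helper, recorded_requests):
    return [helper.respond(req) for req in recorded_requests]


# RdI: closed-loop. The helper's responses drive a task performer that keeps
# navigating and may ask again, so both effectiveness and efficiency matter.
# `performer` and `env` are hypothetical stand-ins for a navigation agent and
# a simulator; none of these method names come from the benchmark itself.
def run_rdi(helper, performer, env, max_turns=10):
    dialog = []
    for _ in range(max_turns):
        request = performer.ask_for_help(env.observe(), dialog)
        if request is None:  # the performer believes it can finish on its own
            break
        response = helper.respond(request)
        dialog += [request.inquiry, response]
        performer.navigate(response, env)  # act on the helper's response
    return env.task_success()
```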
If you use our work, please cite our paper as follows.
@misc{fan2023r2h,
      title={R2H: Building Multimodal Navigation Helpers that Respond to Help Requests},
      author={Yue Fan and Jing Gu and Kaizhi Zheng and Xin Eric Wang},
      year={2023},
      eprint={2305.14260},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}