Publications

CLARA: Classifying and Disambiguating User Commands for Reliable Interactive Robotic Agents 

[Project Page]

Jeongeun Park, Seungwon Lim, Joonhyung Lee, Sangbeom Park, Minsuk Chang, Youngjae Yu, and Sungjoon Choi

IEEE Robotics and Automation Letters (RA-L)

Abstract

In this paper, we focus on inferring whether a given user command is clear, ambiguous, or infeasible in the context of interactive robotic agents utilizing large language models (LLMs). To tackle this problem, we first present an uncertainty estimation method for LLMs to classify whether the command is certain (i.e., clear) or not (i.e., ambiguous or infeasible). Once the command is classified as uncertain, we further classify it as either ambiguous or infeasible by leveraging LLMs with situationally aware context in a zero-shot manner. For ambiguous commands, we disambiguate the command by interacting with the user via question generation with LLMs. We believe that proper recognition of the given commands can reduce malfunctions and undesired actions of the robot, enhancing the reliability of interactive robot agents. We present a dataset for robotic situational awareness, consisting of pairs of high-level commands, scene descriptions, and labels of command type (i.e., clear, ambiguous, or infeasible). We validate the proposed method on the collected dataset and in a pick-and-place tabletop simulation. Finally, we demonstrate the proposed approach in real-world human-robot interaction experiments, i.e., handover scenarios.
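
As a rough illustration of the pipeline described in the abstract, here is a minimal Python sketch of the two-stage triage: sample several LLM plans to estimate uncertainty, then zero-shot classify uncertain commands and generate a clarifying question for ambiguous ones. The `query_llm` helper, the entropy threshold, and the prompts are all hypothetical placeholders, not the paper's actual implementation.

```python
import math
from collections import Counter


def query_llm(prompt: str, temperature: float = 1.0) -> str:
    """Hypothetical LLM call; replace with a real chat-completion client."""
    raise NotImplementedError


def command_entropy(command: str, scene: str, n_samples: int = 10) -> float:
    """Sample plans for the command and measure disagreement via entropy."""
    prompt = f"Scene: {scene}\nCommand: {command}\nNext robot action:"
    samples = [query_llm(prompt, temperature=1.0) for _ in range(n_samples)]
    counts = Counter(samples)
    return -sum((c / n_samples) * math.log(c / n_samples) for c in counts.values())


def triage(command: str, scene: str, threshold: float = 0.5) -> str:
    """Classify a command as 'clear', 'ambiguous', or 'infeasible'."""
    if command_entropy(command, scene) < threshold:
        return "clear"
    # Zero-shot follow-up: let the LLM separate ambiguous from infeasible
    # given the situational context.
    verdict = query_llm(
        f"Scene: {scene}\nCommand: {command}\n"
        "Is this command 'ambiguous' (needs clarification) or "
        "'infeasible' (impossible in this scene)? Answer with one word:",
        temperature=0.0,
    )
    if "ambiguous" in verdict.lower():
        # Disambiguate by asking the user a clarifying question.
        print(query_llm(
            f"Scene: {scene}\nCommand: {command}\n"
            "Ask the user one short question that resolves the ambiguity:"
        ))
        return "ambiguous"
    return "infeasible"
```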

[Figure: Experiments]

SOCRATES: Text-based Human Search and Approach using a Robot Dog

[arXiv] [Project Page]

Jeongeun Park, Jefferson Silveira, Matthew Pan, Sungjoon Choi

The 2024 IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)

Abstract
In this paper, we propose a SOCratic model for Robots Approaching humans based on TExt System (SOCRATES) focusing on the human search and approach based on free-form textual description; the robot first searches for the target user, then the robot proceeds to approach in a human-friendly manner. In particular, textual descriptions are composed of appearance (e.g., "wearing white shirts with black hair") and location clues (e.g., "is a student who works with robots"). We initially present a Human Search Socratic Model that connects large pre-trained models in the language domain to solve the downstream task, which is searching for the target person based on textual descriptions. Then, we propose a hybrid learning-based framework for generating target-cordial robotic motion to approach a person, consisting of a learning-from-demonstration module and a knowledge distillation module. We validate the proposed searching module via simulation using a virtual mobile robot as well as through real-world experiments involving participants and the Boston Dynamics Spot robot. Furthermore, we analyze the properties of the proposed approaching framework with human participants based on the Robotic Social Attributes Scale (RoSAS).
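
For intuition, the Socratic search idea can be sketched as chaining two pre-trained models: a vision-language model captions each detected person, and a language model scores the caption (plus a location hint) against the free-form description. Both `caption_person` and `match_score` below are hypothetical stubs; the paper's actual models and scoring scheme are not reproduced here.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    image_id: str        # crop of a detected person
    location_hint: str   # e.g., a room label from the robot's map


def caption_person(image_id: str) -> str:
    """Hypothetical vision-language model call returning an appearance caption."""
    raise NotImplementedError


def match_score(description: str, caption: str, location_hint: str) -> float:
    """Hypothetical LLM call rating description/caption/location agreement in [0, 1]."""
    raise NotImplementedError


def search(description: str, candidates: list[Candidate]) -> Candidate:
    """Return the candidate whose caption and location best match the query."""
    return max(
        candidates,
        key=lambda c: match_score(description, caption_person(c.image_id),
                                  c.location_hint),
    )
```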

[Figure: Search Phase]

[Figure: Approach Phase]

Zero-shot Active Visual Search (ZAVIS): Intelligent Object Search for Robotic Assistants 

[Paper] [Project Page]

Jeongeun Park, Taerim Yoon, Jejoon Hong, Youngjae Yu, Matthew Pan, Sungjoon Choi

The 2023 IEEE International Conference on Robotics and Automation (ICRA)

Abstract
In this paper, we focus on the problem of efficiently locating a target object described with free-form text using a mobile robot equipped with vision sensors (e.g., an RGBD camera). Conventional active visual search predefines a set of objects to search for, rendering these techniques restrictive in practice. To provide added flexibility in active visual searching, we propose a system where a user can enter target commands using free-form text; we call this system Zero-shot Active Visual Search (ZAVIS). ZAVIS detects and plans to search for a target object entered by a user through a semantic grid map represented by static landmarks (e.g., desk or bed). For efficient planning of object search patterns, ZAVIS considers commonsense knowledge-based co-occurrence and predictive uncertainty while deciding which landmarks to visit first. We validate the proposed method with respect to SR (success rate) and SPL (success weighted by path length) in both simulated and real-world environments. The proposed method outperforms previous methods in terms of SPL in simulated scenarios with an average gap of 0.283. We further demonstrate ZAVIS with a Pioneer-3AT robot in real-world studies.
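
To make the planning intuition concrete, here is an illustrative sketch of greedy landmark selection that trades off commonsense co-occurrence against predictive uncertainty and travel cost. The scoring formula and the numbers are assumptions for illustration, not ZAVIS's exact objective.

```python
from dataclasses import dataclass


@dataclass
class Landmark:
    name: str            # e.g., "desk" or "bed"
    cooccurrence: float  # commonsense likelihood of the target being nearby, in [0, 1]
    uncertainty: float   # predictive uncertainty of that estimate, in [0, 1]
    distance: float      # travel cost from the robot's current pose


def next_landmark(landmarks: list[Landmark]) -> Landmark:
    """Greedy choice: favor high co-occurrence, low uncertainty, short travel."""
    return max(
        landmarks,
        key=lambda lm: lm.cooccurrence * (1.0 - lm.uncertainty) / (1.0 + lm.distance),
    )


if __name__ == "__main__":
    options = [
        Landmark("desk", cooccurrence=0.8, uncertainty=0.2, distance=2.0),
        Landmark("bed", cooccurrence=0.4, uncertainty=0.1, distance=1.0),
    ]
    print(next_landmark(options).name)  # -> desk (score 0.213 vs. 0.180)
```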

[Figure: Proposed Method]

[Figure: Proposed Detection Model]

Elucidating Robust Learning with Uncertainty-Aware Corruption Pattern Estimation

[Paper] [Code]

Jeongeun Park, Seungyoun Shin, Sangheum Hwang, Sungjoon Choi

Pattern Recognition, Volume 138, 2023, Article 109387, ISSN 0031-3203. https://doi.org/10.1016/j.patcog.2023.109387 (IF: 8.544)

Abstract
Robust learning methods aim to learn a clean target distribution from noisy and corrupted training data, where a specific corruption pattern is often assumed a priori. Our proposed method not only learns the clean target distribution from a dirty dataset but also estimates the underlying noise pattern. To this end, we leverage a mixture-of-experts model that can distinguish two different types of predictive uncertainty, aleatoric and epistemic uncertainty. We show that the ability to estimate the uncertainty plays a significant role in elucidating the corruption patterns, as these two objectives are tightly intertwined. We also present a novel validation scheme for evaluating the performance of the corruption pattern estimation. Our proposed method is extensively assessed in terms of both robustness and corruption pattern estimation in the computer vision domain.
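
The aleatoric/epistemic split the abstract refers to can be illustrated with the standard entropy decomposition over a set of expert predictions: total predictive entropy equals expected per-expert entropy (aleatoric) plus the mutual information between prediction and expert (epistemic). The toy probabilities below are made up for illustration; the paper's mixture-of-experts model itself is not reproduced here.

```python
import numpy as np


def entropy(p: np.ndarray) -> float:
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())


def decompose(expert_probs: np.ndarray) -> tuple[float, float]:
    """expert_probs: (n_experts, n_classes) array of class distributions."""
    total = entropy(expert_probs.mean(axis=0))                      # H[E_k p_k]
    aleatoric = float(np.mean([entropy(p) for p in expert_probs]))  # E_k H[p_k]
    epistemic = total - aleatoric                                   # mutual information
    return aleatoric, epistemic


# Experts agree on a genuinely noisy label -> aleatoric high, epistemic ~ 0.
print(decompose(np.array([[0.5, 0.5], [0.5, 0.5]])))      # (~0.693, ~0.0)
# Experts disagree confidently -> epistemic high, aleatoric low.
print(decompose(np.array([[0.99, 0.01], [0.01, 0.99]])))  # (~0.056, ~0.637)
```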

[Figure: Task Explanation]

[Figure: Architecture]