Evaluating Uncertainty-based Failure Detection for Closed-Loop LLM Planners

Zhi Zheng*,1,2, Qian Feng1,2, Hang Li1,2, Alois Knoll2, Jianxiang Feng*,1,2

*: Equal Contributions, {zhi.zheng, jianxiang.feng}@tum.de.

1: Agile Robots AG; 2: Department of Informatics, Technical University of Munich

Abstract

Recently, Large Language Models (LLMs) have demonstrated remarkable performance as zero-shot task planners for robotic manipulation tasks. However, the open-loop nature of previous works makes LLM-based planning error-prone and fragile. On the other hand, failure detection approaches for closed-loop planning are often limited by task-specific heuristics or rest on the unrealistic assumption that the prediction is always trustworthy. In this work, we attempt to mitigate these issues by introducing KnowLoop, a framework for closed-loop LLM-based planning backed by an uncertainty-based failure detector built on Multimodal Large Language Models (MLLMs). Specifically, we evaluate three ways of quantifying the uncertainty of MLLMs, namely token probability, entropy, and self-explained confidence, as primary metrics based on three carefully designed, representative prompting strategies. With a self-collected dataset covering various manipulation tasks and an LLM-based robot system, our experiments demonstrate that token probability and entropy are more reflective of failures than self-explained confidence. By setting an appropriate threshold to filter out uncertain predictions and actively seek human help, the accuracy of failure detection can be significantly enhanced. This improvement boosts the effectiveness of closed-loop planning and the overall task success rate.
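The two uncertainty metrics that work best in our experiments, token probability and entropy, can both be computed from the per-token log-probabilities that many MLLM APIs expose. A minimal sketch (the `logprobs` / `token_distributions` input formats are assumptions for illustration; real APIs expose this information in their own formats):

```python
import math

def token_probability(logprobs):
    """Mean probability of the generated answer tokens.

    `logprobs`: list of log-probabilities, one per generated token
    (hypothetical format). Higher mean probability = lower uncertainty."""
    probs = [math.exp(lp) for lp in logprobs]
    return sum(probs) / len(probs)

def predictive_entropy(token_distributions):
    """Mean per-token entropy of the next-token distribution.

    `token_distributions`: list of probability vectors, one per decoding
    step. Higher entropy = higher uncertainty."""
    entropies = [
        -sum(p * math.log(p) for p in dist if p > 0)
        for dist in token_distributions
    ]
    return sum(entropies) / len(entropies)
```

Self-explained confidence, by contrast, is obtained by directly prompting the model to state a confidence score, which requires no access to logits but proved less reliable in our evaluation.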

Video: knowloop_video.mp4

Implementation

Prompting Strategy for MLLM Failure Detector

Direct Prompts

Indirect Prompts

In-House Collected Dataset

Experimental Results: Uncertainty Estimation

Entropy

Token Probability

Self-Explained Confidence

Top row: filtering based on percentage; bottom row: filtering based on actual uncertainty values. For self-explained confidence, value-based filtering is less sensible because the model often fails to report a usable confidence value.

Experimental Results: Real-Robot Deployment

Bibtex:

@inproceedings{zheng2024evaluating,
  title={Evaluating Uncertainty-based Failure Detection for Closed-Loop {LLM} Planners},
  author={Zhi Zheng and Qian Feng and Hang Li and Alois Knoll and Jianxiang Feng},
  booktitle={ICRA 2024 Workshop on Back to the Future: Robot Learning Going Probabilistic},
  year={2024},
  url={https://openreview.net/forum?id=9w1JnHG8Wn}
}