Paper / Code (Coming soon) / OpenReview
Reasoning about failures is crucial for building reliable and trustworthy robotic systems. Prior approaches either treat failure reasoning as a closed-set classification problem or assume access to ample human annotations. Yet real-world failures are typically subtle, combinatorial, and difficult to enumerate, while rich reasoning labels are expensive to acquire. We address this problem by introducing ARMOR: Adaptive Round-based Multi-task mOdel for Robotic failure detection and reasoning. We formulate detection and reasoning as a multi-task self-refinement process in which the model iteratively predicts detection outcomes and natural-language reasoning conditioned on its past outputs. During training, ARMOR learns from heterogeneous supervision (large-scale sparse binary labels and small-scale rich reasoning annotations), optimized via a combination of offline and online imitation learning. At inference time, ARMOR generates multiple refinement trajectories and selects the most confident prediction via a self-certainty metric. Experiments across diverse environments show that ARMOR achieves state-of-the-art performance, improving over prior approaches by up to 30% in failure detection rate and up to 100% in reasoning quality as measured by an LLM fuzzy-match score, demonstrating robustness to heterogeneous supervision and open-ended reasoning beyond predefined failure modes.
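To make the heterogeneous-supervision setup concrete, here is a minimal sketch of how a single multi-task objective could combine both label types. The function name, the weighting term `lam`, and the use of standard cross-entropy losses are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical multi-task objective over heterogeneous supervision:
# every example carries a binary detection label, while only a small
# subset carries free-form reasoning tokens.
import torch.nn.functional as F

def multitask_loss(det_logits, det_labels, reason_logits=None, reason_tokens=None, lam=1.0):
    # Binary detection loss: available for all (large-scale) examples.
    loss = F.binary_cross_entropy_with_logits(det_logits, det_labels.float())
    # Reasoning language-modeling loss: only for the annotated subset.
    if reason_logits is not None and reason_tokens is not None:
        loss = loss + lam * F.cross_entropy(
            reason_logits.reshape(-1, reason_logits.size(-1)),
            reason_tokens.reshape(-1),
            ignore_index=-100,  # mask padding / unsupervised positions
        )
    return loss
```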
Prior works reduce failure reasoning to closed-set classification over predefined failure modes.
ARMOR instead performs open-ended, iterative refinement, jointly revising its detection and reasoning predictions. This yields accurate detection with nuanced, human-like reasoning that captures real-world failures beyond fixed categories.
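As a rough illustration of this round-based loop, the sketch below conditions each new prediction on the history of previous rounds. The `model.predict` interface, the `RoundOutput` fields, and the fixed round count are hypothetical, chosen for exposition rather than taken from the paper.

```python
# Illustrative self-refinement loop (interface names are hypothetical).
from dataclasses import dataclass

@dataclass
class RoundOutput:
    failure_detected: bool  # output of the binary detection head
    reasoning: str          # free-form natural-language reasoning

def self_refine(model, observation, num_rounds: int = 3) -> list[RoundOutput]:
    """Iteratively re-predict detection and reasoning, conditioning on past rounds."""
    history: list[RoundOutput] = []
    for _ in range(num_rounds):
        out = model.predict(observation, history)  # sees its own prior outputs
        history.append(out)
    return history
```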
Overview of ARMOR. (a) We consider a large amount of binary detection labels and scarce free-form reasoning labels. (b) We adapt a VLM with multiple prediction heads over a shared representation. (c) We fine-tune the VLM with offline behavior cloning (BC) and train it online to refine its own outputs. (d) At inference, we roll out multiple refinement turns and pick the final answer with the lowest entropy.
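Step (d) can be read as a best-of-N selection rule: roll out several refinement trajectories and keep the answer whose token distribution is most certain. The sketch below uses mean token entropy as the self-certainty proxy; the trajectory data structure and the exact metric are assumptions on our part.

```python
import math

def token_entropy(probs):
    """Shannon entropy of a single next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_most_certain(trajectories):
    """trajectories: list of (final_answer, per_token_probs) pairs (hypothetical
    structure). Returns the final answer with the lowest mean token entropy."""
    def mean_entropy(traj):
        _, per_token = traj
        return sum(token_entropy(p) for p in per_token) / len(per_token)
    return min(trajectories, key=mean_entropy)[0]
```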
ARMOR achieves state-of-the-art performance, improving over prior approaches by up to 30% in failure detection rate and up to 100% in reasoning quality as measured by an LLM fuzzy-match score, demonstrating open-ended reasoning beyond predefined failure modes.