AIC MLLM: Autonomous Interactive Correction MLLM for Robust Robotic Manipulation



We introduce AIC MLLM, a framework that utilizes MLLM for correcting SE(3) pose predictions by learning from low-level interaction failures.

Abstract

The ability to reflect on and correct failures is crucial for robotic systems to interact stably with real-life objects. Motivated by the generalization and reasoning capabilities of Multimodal Large Language Models (MLLMs), previous approaches have sought to use these models to enhance robotic systems. However, these methods typically focus on high-level planning corrections using an additional MLLM, making limited use of failed samples to correct low-level contact poses. To address this gap, we propose an Autonomous Interactive Correction (AIC) MLLM, which leverages previous low-level interaction experiences to correct SE(3) pose predictions. Specifically, AIC MLLM is first fine-tuned to acquire both pose prediction and feedback prompt comprehension abilities. We carefully design two types of prompt instructions obtained through interactions with objects: 1) visual masks that highlight unmovable parts for position correction, and 2) textual descriptions that indicate potential directions for rotation correction. During inference, a Feedback Information Extraction (FIE) module is introduced to recognize the cause of failure, allowing AIC MLLM to adaptively correct its pose prediction using the corresponding prompts. To further enhance manipulation stability, we devise a Test-Time Adaptation (TTA) strategy that enables AIC MLLM to better adapt to the current scene configuration. Finally, extensive experiments are conducted in both simulated and real-world environments to evaluate the proposed method. The results demonstrate that AIC MLLM can efficiently correct failed samples by leveraging interaction experience prompts.
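As a rough illustration of this correction loop, the sketch below shows how the pieces fit together at inference time. It is not the released implementation: all function names (predict_pose, execute_action, extract_feedback, tta_update) are hypothetical placeholders standing in for the MLLM, the robot executor, the FIE module, and the TTA step.

```python
from dataclasses import dataclass
from typing import Callable, Optional

import numpy as np


@dataclass
class Pose:
    position: np.ndarray  # (3,) contact position
    rotation: np.ndarray  # (3, 3) end-effector rotation matrix


def correct_with_feedback(
    predict_pose: Callable[[np.ndarray, dict], Pose],          # MLLM pose prediction
    execute_action: Callable[[Pose], bool],                     # returns True on success
    extract_feedback: Callable[[np.ndarray, Pose], dict],       # FIE module
    tta_update: Callable[[np.ndarray, Optional[Pose]], None],   # test-time adaptation
    image: np.ndarray,
    max_corrections: int = 4,
) -> Optional[Pose]:
    """Predict a pose and, on failure, re-prompt the model with feedback."""
    prompts: dict = {}                 # no feedback prompt on the first attempt
    final_pose: Optional[Pose] = None
    for _ in range(max_corrections + 1):
        pose = predict_pose(image, prompts)
        if execute_action(pose):
            final_pose = pose
            break
        # Failure: convert the failed attempt into visual / linguistic prompts,
        # e.g. a mask over unmovable parts or a textual hint about the rotation axis.
        prompts = extract_feedback(image, pose)
    # After inference on this test sample, adapt the model to the current scene.
    tta_update(image, final_pose)
    return final_pose
```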

Overview

Correction process of AIC MLLM. Given a failed interaction, we first extract feedback information about the object's geometry, then enable the model to reflect on and correct both the position and rotation estimates, thereby generating a more accurate SE(3) pose.
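For concreteness, the SE(3) pose referred to here combines a 3-D contact position with an end-effector orientation; the position correction and the rotation correction act on the two blocks of the same homogeneous transform. A small sketch of assembling such a pose, with made-up example values:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R


def se3_from_position_rotation(position, rotation_matrix):
    """Assemble a 4x4 homogeneous SE(3) transform from position and rotation."""
    T = np.eye(4)
    T[:3, :3] = rotation_matrix   # rotation correction acts here
    T[:3, 3] = position           # position correction acts here
    return T


position = np.array([0.42, -0.08, 0.65])                               # contact point (m)
rotation = R.from_euler("xyz", [0.0, 90.0, 0.0], degrees=True).as_matrix()
pose = se3_from_position_rotation(position, rotation)
print(pose.shape)  # (4, 4)
```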

Training of AIC MLLM

Training of AIC MLLM. We gradually enable the model to predict poses and to comprehend both visual and linguistic feedback prompts, including object part and axis information.
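A minimal sketch of how a feedback-prompted training sample could be assembled for this fine-tuning; the field names and prompt templates below are illustrative assumptions, not the exact data format used in the paper.

```python
import numpy as np


def build_training_sample(image, failed_mask, axis_hint, target_pose):
    """Pack image, feedback prompts, and the target SE(3) pose into one sample."""
    instruction = (
        "The highlighted region could not be moved in the previous attempt. "
        f"The part is likely to rotate about the {axis_hint} axis. "
        "Predict a corrected manipulation pose."
    )
    answer = (
        f"position: {np.round(target_pose['position'], 3).tolist()}, "
        f"rotation (euler): {np.round(target_pose['rotation_euler'], 3).tolist()}"
    )
    return {
        "image": image,                # RGB observation
        "visual_prompt": failed_mask,  # binary mask over the unmovable part
        "instruction": instruction,    # linguistic feedback prompt
        "answer": answer,              # target pose serialized as text for the MLLM
    }
```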

Testing of AIC MLLM

If a failed interaction occurs, the Feedback Information Extraction (FIE) module extracts feedback information from the previous failed attempts. This information is integrated into visual and linguistic prompts, which are fed into the trained model, enabling it to reflect, correct, and generate a new action prediction. After inference on each test sample, the model's parameters are updated by the Test-Time Adaptation (TTA) module to enhance generalization to the current testing configuration.
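A hedged sketch of one such TTA parameter update, assuming a PyTorch model with a Hugging-Face-style interface that returns a loss when given labels; the choice of supervision signal (re-fitting a pseudo-label derived from the current test sample) is an assumption for illustration, not the paper's exact objective.

```python
import torch


def tta_step(model, optimizer, inputs, pseudo_label, max_grad_norm=1.0):
    """One parameter update on the current test sample."""
    model.train()
    optimizer.zero_grad()
    outputs = model(**inputs, labels=pseudo_label)  # assumed HF-style causal LM interface
    loss = outputs.loss
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    optimizer.step()
    model.eval()
    return loss.item()
```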

Real-world Experiments

Simulation Experiments

We use SAPIEN and the PartNet-Mobility dataset to set up the simulation environment, and employ a Franka Panda on-the-fly suction gripper to execute the end-effector actions.
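A minimal sketch of this setup: loading a PartNet-Mobility object into a SAPIEN 2.x scene. The dataset path and object ID are placeholders, and exact API names can vary slightly across SAPIEN versions.

```python
import sapien.core as sapien

engine = sapien.Engine()
scene = engine.create_scene()
scene.set_timestep(1 / 240)

loader = scene.create_urdf_loader()
loader.fix_root_link = True                       # keep the articulated object in place
obj = loader.load("partnet_mobility/<object_id>/mobility.urdf")

# Inspect the movable joints (e.g. drawers, doors) that the policy can act on.
for joint in obj.get_active_joints():
    print(joint.get_name(), joint.get_limits())

for _ in range(240):                              # advance one second of simulation
    scene.step()
```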

We conducted a series of comparative experiments and ablation studies, using success rate as the evaluation metric. In the main comparison, the first four rows report results without the correction mechanism, while the remaining rows report results with the correction mechanism applied and four correction opportunities.

The figure on the left illustrates the relationship between success rate and the number of corrections. On the right, the correction process is visualized in simulation.

Tab. 2 presents the ablation study comparing the ViP-LLaVA variant and the GT variant: the former uses the ViP-LLaVA model to predict part mobility, while the latter uses the ground-truth interaction map. The gap between the two indicates that ViP-LLaVA's understanding of embodied data is insufficient. Using models with a better comprehension of embodied data, or incorporating human feedback into our framework, could lead to further improvements.