Physically Grounded Vision-Language Models for Robotic Manipulation

Abstract: Recent advances in vision-language models (VLMs) have led to improved performance on tasks such as visual question answering and image captioning. Consequently, these models are now well-positioned to reason about the physical world, particularly within domains such as robotic manipulation. However, current VLMs are limited in their understanding of the physical concepts (e.g., material, fragility) of common objects, which restricts their usefulness for robotic manipulation tasks that involve interaction and physical reasoning about such objects. To address this limitation, we propose PhysObjects, an object-centric dataset of 36.9K crowd-sourced and 417K automated physical concept annotations of common household objects. We demonstrate that fine-tuning a VLM on PhysObjects improves its understanding of physical object concepts, by capturing human priors of these concepts from visual appearance. We incorporate this physically-grounded VLM in an interactive framework with a large language model-based robotic planner, and show improved planning performance on tasks that require reasoning about physical object concepts, compared to baselines that do not leverage physically-grounded VLMs. We additionally illustrate the benefits of our physically-grounded VLM on a real robot, where it improves task success rates.
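As a rough illustration of how a planner might query a physically grounded VLM about object properties, the sketch below uses the base InstructBLIP model from Hugging Face transformers as a stand-in, since the fine-tuned PG-InstructBLIP weights are not yet released. The checkpoint name, prompt format, image URL, and helper function are illustrative assumptions, not the paper's exact setup.

```python
import requests
import torch
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

# Stand-in checkpoint; swap in the PG-InstructBLIP weights once released.
MODEL_NAME = "Salesforce/instructblip-flan-t5-xl"
processor = InstructBlipProcessor.from_pretrained(MODEL_NAME)
model = InstructBlipForConditionalGeneration.from_pretrained(MODEL_NAME)

def query_physical_concept(image: Image.Image, obj: str, concept: str) -> str:
    """Ask the VLM whether an object in the image has a given physical property."""
    prompt = f"Question: Is the {obj} {concept}? Answer yes or no. Answer:"
    inputs = processor(images=image, text=prompt, return_tensors="pt")
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=5)
    return processor.batch_decode(output_ids, skip_special_tokens=True)[0].strip()

# Hypothetical usage: an LLM-based planner could call this to ground its plan
# in physical object properties (e.g., fragility) before choosing an action.
image = Image.open(requests.get("https://example.com/scene.jpg", stream=True).raw)
print(query_physical_concept(image, "wine glass", "fragile"))  # e.g., "yes"
```

In the interactive framework described above, answers like these would be fed back to the language model planner as text, so the planner can condition its next step on the VLM's physical reasoning.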

PhysObjects Dataset: https://drive.google.com/file/d/1ThZ7p_5BnMboK_QE13m1fPKa4WGdRcfC/view?usp=sharing

PG-InstructBLIP Model Weights: To be released upon publication.

Real Scene Planning Evaluation

Click on a scene image to go to the planning results for a task in that scene. 

Scene 1: Countertop

Scene 2: Art Table

Scene 3: Floor

Scene 4: Kitchen A

Scene 5: Kitchen B

Scene 6: Salad Bar

Scene 7: Living Room

Scene 8: Shelf

Real Robot Evaluation

Click on a scene image to go to videos of all tasks for that scene. 

Scene 1

Scene 2