Category-Independent Articulated Object Tracking with Factor Graphs

Nick Heppert, Toki Migimatsu, Brent Yi, Claire Chen, Jeannette Bohg

Abstract

Robots deployed in human-centric environments may need to manipulate a diverse range of articulated objects, such as doors, dishwashers, and cabinets. Articulated objects often come with unexpected articulation mechanisms that are inconsistent with categorical priors: for example, a drawer might rotate about a hinge joint instead of sliding open. We propose a category-independent framework for predicting the articulation models of unknown objects from sequences of RGB-D images. The prediction is performed in two steps: first, a visual perception module tracks object part poses from raw images, and second, a factor graph takes these poses and infers the articulation model, including the current configuration between the parts, as a 6D twist. We also propose a manipulation-oriented metric that evaluates a predicted joint twist in terms of how well a compliant robot controller could manipulate the articulated object given that twist. We demonstrate that our visual perception and factor graph modules outperform baselines on simulated data and show the applicability of our factor graph on real-world data.
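To make the abstract's articulation model concrete: a single 6D twist, together with a scalar configuration q, determines the relative pose between two object parts via the SE(3) exponential map, and it covers both hinged and sliding mechanisms in one parameterization. The sketch below is an illustration of this idea, not the paper's implementation; the (linear, angular) twist ordering, the helper names, and the example joints are choices made for this sketch.

import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([
        [0.0, -w[2], w[1]],
        [w[2], 0.0, -w[0]],
        [-w[1], w[0], 0.0],
    ])

def exp_twist(xi, q):
    """SE(3) exponential of the scaled twist q * xi.

    xi = (v, w): linear part v, angular part w (ordering chosen for
    this sketch). Returns a 4x4 homogeneous transform of the child
    part relative to the parent at configuration q.
    """
    v, w = q * np.asarray(xi[:3]), q * np.asarray(xi[3:])
    theta = np.linalg.norm(w)
    K = hat(w)
    if theta < 1e-9:
        # No rotation: prismatic joint or zero motion.
        R, V = np.eye(3), np.eye(3)
    else:
        # Rodrigues' formula and the corresponding translation term.
        R = np.eye(3) + np.sin(theta) / theta * K \
            + (1.0 - np.cos(theta)) / theta**2 * K @ K
        V = np.eye(3) + (1.0 - np.cos(theta)) / theta**2 * K \
            + (theta - np.sin(theta)) / theta**3 * K @ K
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ v
    return T

# Revolute joint: axis direction a through point p -> twist (-a x p, a).
a, p = np.array([0.0, 0.0, 1.0]), np.array([0.4, 0.0, 0.0])
xi_revolute = np.concatenate([-np.cross(a, p), a])

# Prismatic joint: pure translation along d -> twist (d, 0).
d = np.array([1.0, 0.0, 0.0])
xi_prismatic = np.concatenate([d, np.zeros(3)])

print(exp_twist(xi_revolute, q=np.pi / 2))   # door swung open 90 degrees
print(exp_twist(xi_prismatic, q=0.3))        # drawer pulled out 0.3 m

A prismatic joint is simply the zero-angular-part special case of this twist, which is why the same representation can describe a drawer that unexpectedly rotates about a hinge instead of sliding open, without relying on a category prior.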

Visual Results

Short Video

Long Video

BibTeX

@inproceedings{heppert2022category,
  author    = {Nick Heppert and
               Toki Migimatsu and
               Brent Yi and
               Claire Chen and
               Jeannette Bohg},
  title     = {Category-Independent Articulated Object Tracking with Factor Graphs},
  booktitle = {{IEEE/RSJ} International Conference on Intelligent Robots and Systems,
               {IROS} 2022, Kyoto, Japan, October 23-27, 2022},
  pages     = {3800--3807},
  publisher = {{IEEE}},
  year      = {2022},
  doi       = {10.1109/IROS47612.2022.9982029},
}