Multimodal Sensor Fusion with Differentiable Filters
Michelle A. Lee*, Brent Yi*, Roberto Martín-Martín, Silvio Savarese, Jeannette Bohg
Leveraging multimodal information with recursive Bayesian filters improves the performance and robustness of state estimation, as recursive filters can combine different modalities according to their uncertainties. Prior work has studied how to optimally fuse different sensor modalities with analytical state estimation algorithms. However, deriving the dynamics and measurement models along with their noise profiles can be difficult or lead to intractable models. Differentiable filters provide a way to learn these models end-to-end while retaining the algorithmic structure of recursive filters. This can be especially helpful when working with sensor modalities that are high dimensional and have very different characteristics. In contact-rich manipulation, we want to combine visual sensing (which gives us global information) with tactile sensing (which gives us local information). In this paper, we study new differentiable filtering architectures to fuse heterogeneous sensor information. As case studies, we evaluate three tasks: two in planar pushing (simulated and real) and one in manipulating a kinematically constrained door (simulated). In extensive evaluations, we find that differentiable filters that leverage crossmodal sensor information reach accuracies comparable to unstructured LSTM models, while offering interpretability benefits that may be important for safety-critical systems.
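To make the crossmodal fusion idea concrete, below is a minimal PyTorch sketch of one way such an architecture can be structured. It is not the authors' exact model or the torchfilter API: two unimodal estimators map an "image" observation and a "tactile" observation to state estimates with learned uncertainties, and a crossmodal weighting network rescales each modality's confidence before a precision-weighted fusion step. All module names, dimensions, and layer sizes here are illustrative assumptions.

```python
import torch
import torch.nn as nn


class UnimodalEstimator(nn.Module):
    """Maps one sensor modality to a state estimate and a per-dim log-variance."""

    def __init__(self, obs_dim: int, state_dim: int, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden, state_dim)
        self.logvar_head = nn.Linear(hidden, state_dim)

    def forward(self, obs: torch.Tensor):
        features = self.trunk(obs)
        return self.mean_head(features), self.logvar_head(features)


class CrossmodalFusion(nn.Module):
    """Fuses unimodal estimates with learned crossmodal weights."""

    def __init__(self, obs_dims: tuple, state_dim: int, hidden: int = 64):
        super().__init__()
        self.estimators = nn.ModuleList(
            [UnimodalEstimator(d, state_dim) for d in obs_dims]
        )
        # The weighting network sees all raw observations and outputs one
        # weight per modality per state dimension.
        self.weight_net = nn.Sequential(
            nn.Linear(sum(obs_dims), hidden), nn.ReLU(),
            nn.Linear(hidden, len(obs_dims) * state_dim),
        )
        self.state_dim = state_dim

    def forward(self, observations):
        means, precisions = [], []
        for estimator, obs in zip(self.estimators, observations):
            mean, logvar = estimator(obs)
            means.append(mean)
            precisions.append(torch.exp(-logvar))  # inverse variance

        # Crossmodal weights rescale each modality's precision.
        weights = self.weight_net(torch.cat(observations, dim=-1))
        weights = torch.softmax(
            weights.view(-1, len(self.estimators), self.state_dim), dim=1
        )

        means = torch.stack(means, dim=1)            # (B, M, state_dim)
        precisions = torch.stack(precisions, dim=1)  # (B, M, state_dim)
        weighted_precision = weights * precisions

        # Precision-weighted (information-form) fusion across modalities.
        fused_precision = weighted_precision.sum(dim=1)
        fused_mean = (weighted_precision * means).sum(dim=1) / fused_precision
        return fused_mean, 1.0 / fused_precision  # fused mean and variance


if __name__ == "__main__":
    # Toy usage: fuse a 32-dim "image" feature and an 8-dim "tactile" reading
    # into a 3-dim planar state (x, y, theta). Dimensions are hypothetical.
    model = CrossmodalFusion(obs_dims=(32, 8), state_dim=3)
    image_obs, tactile_obs = torch.randn(4, 32), torch.randn(4, 8)
    mean, var = model([image_obs, tactile_obs])
    print(mean.shape, var.shape)  # torch.Size([4, 3]) torch.Size([4, 3])
```

In a full differentiable filter, a fused estimate and uncertainty like this would feed the update step of a recursive filter (e.g., an EKF), so the fusion weights are trained end-to-end through the filtering loop rather than in isolation.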
IROS 2020 Conference Paper: https://arxiv.org/abs/2010.13021
PyTorch Filtering Library: https://github.com/stanford-iprl-lab/torchfilter
Paper Source Code: https://github.com/brentyi/multimodalfilter
Contact: michellelee {at} cs {dot} stanford {dot} edu for more information
*Equal contribution. All authors are with Stanford Artificial Intelligence Lab (SAIL), Stanford University.