ShaSTA: Modeling Shape and Spatio-Temporal Affinities for 3D Multi-Object Tracking

Tara Sadjadpour1, Jie Li2, Rares Ambrus3, Jeannette Bohg1

1Stanford University, 2NVIDIA, 3Toyota Research Institute

Abstract

Multi-object tracking (MOT) is a cornerstone capability of any robotic system. The quality of tracking largely depends on the quality of the detector used. In many applications, such as autonomous vehicles, it is preferable to over-detect objects to avoid catastrophic outcomes due to missed detections. As a result, current state-of-the-art 3D detectors produce high rates of false positives to ensure a low number of false negatives. This can negatively affect tracking by making data association and track lifecycle management more challenging. Additionally, occasional false-negative detections due to difficult scenarios like occlusions can harm tracking performance. To address these issues in a unified framework, we propose to learn shape and spatio-temporal affinities between tracks and detections in consecutive frames. Our affinities provide a probabilistic matching that leads to robust data association, track lifecycle management, false-positive elimination, false-negative propagation, and sequential track confidence refinement. While past 3D MOT approaches address a subset of these components, we offer the first self-contained framework that addresses all of them. We quantitatively evaluate our method on the nuScenes tracking benchmark, where we achieve 1st place among LiDAR-only trackers using CenterPoint detections. Our method estimates accurate and precise tracks while decreasing the overall number of false-positive and false-negative tracks and increasing the number of true-positive tracks. Unlike past works, we analyze our performance with 5 metrics, including AMOTA, the most common tracking accuracy metric, to give a comprehensive view of how our tracking framework may impact the ultimate goal of an autonomous mobile agent. We also present ablative experiments, as well as qualitative results that demonstrate our framework's capabilities in complex scenarios.
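To make the affinity-based data association described above concrete, here is a minimal sketch of matching tracks to detections via an affinity matrix. Everything here is an illustrative assumption, not the paper's actual method: ShaSTA learns its affinities with a neural network and augments the matrix to handle newborn tracks, dead tracks, false positives, and false negatives, whereas this sketch uses a given matrix, a hypothetical `associate` function, a fixed threshold, and simple greedy selection.

```python
import numpy as np

def associate(affinity, thresh=0.5):
    """Greedily match tracks to detections on an affinity matrix.

    affinity[i, j] in [0, 1]: estimated probability that track i and
    detection j are the same object (in ShaSTA this would come from the
    learned shape and spatio-temporal affinity model).

    Returns (matches, unmatched_tracks, unmatched_detections).
    Unmatched tracks are candidates for false-negative propagation or
    track death; unmatched detections are candidates for track birth or
    false-positive elimination.
    """
    A = affinity.copy()
    matches = []
    # Repeatedly take the highest-affinity pair above the threshold,
    # then invalidate its row and column so each track/detection is used once.
    while A.size and A.max() > thresh:
        i, j = np.unravel_index(np.argmax(A), A.shape)
        matches.append((int(i), int(j)))
        A[i, :] = -1.0
        A[:, j] = -1.0
    matched_t = {i for i, _ in matches}
    matched_d = {j for _, j in matches}
    unmatched_tracks = [i for i in range(affinity.shape[0]) if i not in matched_t]
    unmatched_dets = [j for j in range(affinity.shape[1]) if j not in matched_d]
    return matches, unmatched_tracks, unmatched_dets

# Example: 3 tracks, 2 detections; track 2 finds no confident match.
m, ut, ud = associate(np.array([[0.9, 0.1],
                                [0.2, 0.8],
                                [0.1, 0.1]]))
```

A common alternative to the greedy loop is optimal bipartite matching (e.g. the Hungarian algorithm); the surrounding birth/death logic stays the same either way.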

Demo Video

Citation

If you found this work interesting, please consider citing:

@article{sadjadpour2023shasta,
  title={ShaSTA: Modeling shape and spatio-temporal affinities for 3D multi-object tracking},
  author={Sadjadpour, Tara and Li, Jie and Ambrus, Rares and Bohg, Jeannette},
  journal={IEEE Robotics and Automation Letters},
  year={2023},
  publisher={IEEE}
}

Extension

If you enjoy this work and are interested in multi-modal perception with camera-LiDAR fusion, please also see our follow-up work ShaSTA-Fuse: Camera-LiDAR Sensor Fusion to Model Shape and Spatio-Temporal Affinities for 3D Multi-Object Tracking.

Research Supported by TRI