Describe and Attend to Track with Spatio-Temporal Graph


Xiao Wang#1, Chenglong Li#1, Tianzhu Zhang#2,3, Bin Luo#1, Jin Tang#1

#1 School of Computer Science and Technology, Anhui University, Hefei, Anhui Province, China

#2 National Laboratory of Pattern Recognition, Institute of Automation, CAS

#3 University of Chinese Academy of Sciences


Abstract

The tracking-by-detection framework requires a set of positive and negative training samples to learn robust tracking models for precise localization of target objects. However, existing tracking models mostly treat different samples independently and ignore the relationships among them. In this paper, we propose a novel structure-aware deep neural network to overcome this limitation. In particular, we construct a graph to represent the pairwise relationships among training samples, and additionally use natural language descriptions as supervision to learn both feature representations and classifiers robustly. To refine the target states and re-track the target when it returns to view after heavy occlusion or leaving the frame, we elaborately design a novel subnetwork that learns target-driven visual attention under the guidance of both visual and natural language cues. Extensive experiments on five tracking benchmark datasets validate the effectiveness of the proposed method.
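As a concrete illustration of the graph construction mentioned above, here is a minimal sketch, not the paper's exact implementation: it builds an adjacency matrix over sampled training proposals from pairwise cosine similarity of their deep features. The feature source and the pruning threshold `tau` are assumptions for illustration.

```python
import numpy as np

def build_sample_graph(features, tau=0.7):
    """Build a pairwise-relationship graph over training samples.

    features: (N, D) array of deep features, one row per proposal
              (assumed to come from the tracker's backbone network).
    Returns an (N, N) adjacency matrix whose entries encode how
    strongly two samples are related (cosine similarity), with weak
    relations below `tau` pruned to zero.
    """
    # L2-normalize so the dot product equals cosine similarity.
    normed = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    sim = normed @ normed.T                # pairwise cosine similarity
    adj = np.where(sim >= tau, sim, 0.0)   # prune weak relations
    np.fill_diagonal(adj, 0.0)             # no self-loops
    return adj
```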

Motivation and Contributions of This Paper:

  • How to learn a more robust deep feature representation by considering the correlations among extracted proposals?

  • How to obtain high-quality global proposals for visual tracking?


The contributions of this paper can be summarized as follows:

  1. We propose an effective approach to handle significant appearance changes, heavy occlusion, and out-of-view challenges in visual tracking. Extensive experiments on five tracking benchmarks against recent state-of-the-art trackers demonstrate that our tracker is more robust to these challenging factors.

  2. We propose a novel structure-aware deep neural network that makes the best use of the structure among training sample pairs and thus enhances the discriminative ability of feature representations. To make the representations even more discriminative, we introduce natural language descriptions of target objects to assist visual feature learning via a triplet loss function (a minimal sketch of such a loss is given after this list).

  3. We elaborately design a novel global proposal generation network that learns target-driven visual attention under the guidance of both visual and natural language cues (see the second sketch after this list). Benefiting from the global proposals, our tracker is able to re-track target objects lost due to heavy occlusion or out-of-view motion.
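The triplet loss mentioned in contribution 2 can be sketched as follows. This is an illustrative PyTorch formulation, not the paper's exact loss; it assumes the language description has already been encoded into the same embedding space as the visual features, and the function name and margin value are hypothetical.

```python
import torch
import torch.nn.functional as F

def language_guided_triplet_loss(lang_emb, pos_feat, neg_feat, margin=0.5):
    """Pull target (positive) features toward the language embedding
    of the target object and push background (negative) features away.

    lang_emb: (B, D) embedding of the natural-language description.
    pos_feat: (B, D) features of positive (target) samples.
    neg_feat: (B, D) features of negative (background) samples.
    """
    d_pos = F.pairwise_distance(lang_emb, pos_feat)  # anchor-positive distance
    d_neg = F.pairwise_distance(lang_emb, neg_feat)  # anchor-negative distance
    return F.relu(d_pos - d_neg + margin).mean()     # standard triplet hinge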
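For contribution 3, one plausible way to turn a target-driven attention map into global proposals is to threshold the map and take connected-component bounding boxes, as sketched below. This is an assumption for illustration, not necessarily the paper's procedure; how the attention map is predicted from the fused visual and language cues is left abstract.

```python
import numpy as np
from scipy import ndimage

def proposals_from_attention(att_map, thresh=0.5):
    """Convert a target-driven attention map over the whole frame
    into candidate bounding boxes (x, y, w, h).

    att_map: (H, W) attention scores in [0, 1], assumed to be
             predicted from fused visual and language cues.
    """
    mask = att_map >= thresh          # keep strongly attended regions
    labels, _ = ndimage.label(mask)   # group them into connected components
    boxes = []
    for slc in ndimage.find_objects(labels):
        y, x = slc                    # row slice, column slice
        boxes.append((x.start, y.start, x.stop - x.start, y.stop - y.start))
    return boxes
```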

Network Architecture

Demo Videos:

The red bounding box is our tracking result; the compared trackers include CREST, MDNet, ECO, CCOT, SRDCF, CSRDCF, and SINT++.

cvpr2019_Baby_ce.avi
cvpr2019_CarChase_ce1.avi
cvpr2019_Surf_ce1.avi
cvpr2019_MotorRolling.avi
cvpr2019_Jump.avi
cvpr2019_CarScale.avi

Visualization:

Tracking Results on Public Tracking Benchmarks: