Visual Tracking Using Attention-Modulated Disintegration and Integration


In this paper, we present a novel attention-modulated visual tracking algorithm that decomposes an object into multiple cognitive units and trains multiple elementary trackers, modulating the distribution of attention across various feature and kernel types. In the integration stage, the units are recombined to memorize and recognize the target object effectively. As the elementary trackers, we present a novel attentional feature-based correlation filter (AtCF) that focuses on distinctive attentional features. The effectiveness of the proposed algorithm is validated through experimental comparison with state-of-the-art methods on widely used tracking benchmark datasets.
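To make the disintegration/integration idea concrete, the sketch below shows the standard closed-form correlation-filter update in the Fourier domain and a weighted combination of the response maps of two elementary trackers. This is a minimal illustration, not the paper's AtCF: the feature channels and attention weights here are hypothetical placeholders, and the actual algorithm learns attentional features and weights as described in the paper.

```python
import numpy as np

def train_filter(patch, target_response, lam=1e-2):
    # Closed-form correlation filter in the Fourier domain:
    # H = (G . conj(F)) / (F . conj(F) + lambda)
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target_response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def apply_filter(H, patch):
    # Correlate the filter with a patch; the peak of the
    # response map indicates the estimated target position.
    return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))

def gaussian_response(shape, sigma=2.0):
    # Desired response: a Gaussian peak centred on the patch.
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

# Toy example: two "elementary trackers" built from two hypothetical
# feature channels of the same patch, integrated by a weighted sum.
rng = np.random.default_rng(0)
patch = rng.normal(size=(32, 32))
patch[14:18, 14:18] += 5.0                       # synthetic target blob
g = gaussian_response(patch.shape)

channels = [patch, patch ** 2]                   # hypothetical feature types
filters = [train_filter(c, g) for c in channels]
weights = [0.6, 0.4]                             # hypothetical attention weights

response = sum(w * apply_filter(H, c)
               for w, H, c in zip(weights, filters, channels))
peak = np.unravel_index(np.argmax(response), response.shape)
```

Since both filters are evaluated on their training patches, the integrated response peaks near the patch centre, where the target blob was placed.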

Fig. 1. Framework of the proposed tracker.

Fig. 2. Tracking performance on the OOTB2013 dataset.


11/29, 2017 GitHub repository opened.

05/23, 2017 Benchmark results for the TPAMI2015 and VOT2014 datasets were uploaded.

06/24, 2016 The SCT4 program was uploaded.

06/23, 2016 The paper and bibtex were uploaded.

06/20, 2016 SCT4 was submitted to the VOT 2016 challenge.

04/04, 2016 Project page was built.

03/10, 2016 The conference paper was accepted to CVPR2016.


Visual Tracking Using Attention-Modulated Disintegration and Integration

Jongwon Choi, Hyung Jin Chang, Jiyeoup Jeong, Yiannis Demiris, and Jin Young Choi

IEEE Conference on Computer Vision and Pattern Recognition 2016 (CVPR2016), Accepted. [Poster]

[pdf] [supplementary] [code] [result (CVPR2013)] [result2 (TPAMI2015, VOT2014)] [bibtex]

[Github Link]

If you have questions, please contact the authors.