Context-aware Deep Feature Compression for High-speed Visual Tracking

Abstract

We propose a new context-aware correlation filter based tracking framework that achieves both high computational speed and state-of-the-art performance among real-time trackers. The high computational speed comes mainly from the proposed deep feature compression, which is achieved by a context-aware scheme utilizing multiple expert auto-encoders; a context in our framework refers to the coarse category of the tracking target according to its appearance pattern. In the pre-training phase, one expert auto-encoder is trained per category. In the tracking phase, the best expert auto-encoder is selected for a given target, and only this auto-encoder is used. To achieve high tracking performance with the compressed feature map, we introduce extrinsic denoising processes and a new orthogonality loss term for pre-training and fine-tuning of the expert auto-encoders. We validate the proposed context-aware framework through a number of experiments, where our method achieves performance comparable to state-of-the-art trackers that cannot run in real-time, while itself running at over 100 fps.
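The exact form of the orthogonality loss term is given in the paper, not on this page. As a rough, hedged illustration only, a common way to encourage orthogonality among learned filters is a soft penalty of the form ||W Wᵀ − I||²_F on the flattened filter rows; the sketch below uses that generic formulation and should not be read as the paper's exact loss:

```python
import numpy as np

def orthogonality_penalty(W):
    """Soft orthogonality penalty ||W W^T - I||_F^2 on the rows of W.

    W: (num_filters, filter_dim) array of flattened convolutional filters.
    The penalty is zero exactly when the rows are orthonormal.
    NOTE: this is a generic illustrative formulation (an assumption),
    not necessarily the loss used in TRACA.
    """
    gram = W @ W.T                      # pairwise inner products of filters
    eye = np.eye(W.shape[0])            # target: identity Gram matrix
    return float(np.sum((gram - eye) ** 2))

# Orthonormal rows incur no penalty; correlated rows are penalized.
print(orthogonality_penalty(np.eye(3)))       # 0.0
print(orthogonality_penalty(np.ones((2, 2)))) # 10.0
```

In practice such a term would be added to the reconstruction loss with a weighting coefficient, pushing the compressed feature channels toward being decorrelated.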

Fig 1. Framework of TRACA

Fig 2. Computational Efficiency of TRACA in CVPR2013 Dataset

Table 1. Quantitative Results of TRACA in CVPR2013 Dataset

Fig 3. Quantitative Results of TRACA in Benchmark Tracking Datasets


News

06/17, 2018 Poster, test code, training code, results, and webcam demo code were uploaded.

03/28, 2018 Project page was built.

03/06, 2018 The conference paper was accepted to CVPR 2018. (Poster)


Publication

Context-aware Deep Feature Compression for High-speed Visual Tracking

Jongwon Choi, Hyung Jin Chang, Tobias Fischer, Sangdoo Yun, Kyuewang Lee, Jiyeoup Jeong, Yiannis Demiris, and Jin Young Choi

IEEE Conference on Computer Vision and Pattern Recognition 2018 (CVPR2018), Accepted. [Poster]

[pdf] [supplementary] [poster] [test code] [demo code for webcam] [training code] [results] [bibtex]


If you have questions, please contact jwchoi.pil@gmail.com