Tracking using Multilevel Quantizations

Abstract

Most object tracking methods exploit only a single quantization of the image space: pixels, superpixels, or bounding boxes, each of which has advantages and disadvantages. It is highly unlikely that a common optimal quantization level, suitable for tracking all objects in all environments, exists. We therefore propose a hierarchical appearance representation model for tracking, based on a graphical model that exploits shared information across multiple quantization levels. The tracker aims to find the most probable position of the target by jointly classifying the pixels and superpixels and obtaining the best configuration across all levels. The motion of the bounding box is taken into consideration, while Online Random Forests are used to provide pixel- and superpixel-level quantizations and are progressively updated on the fly. By appropriately considering the multilevel quantizations, our tracker exhibits not only excellent performance in handling non-rigid object deformation, but also robustness to occlusions. A quantitative evaluation is conducted on two benchmark datasets: a non-rigid object tracking dataset (11 sequences) and the CVPR 2013 tracking benchmark (50 sequences). Experimental results show that our tracker overcomes various tracking challenges and is superior to a number of other popular tracking methods.
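
The per-frame procedure described above can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the authors' released implementation; segment_fn, candidates_fn, score_fn, and the two forest objects are hypothetical placeholders standing in for the paper's components.

    import numpy as np

    def track_frame(frame, prev_box, pixel_forest, superpixel_forest,
                    segment_fn, candidates_fn, score_fn):
        # One tracking step: jointly score candidate boxes using
        # pixel- and superpixel-level quantizations.
        superpixels = segment_fn(frame)                      # superpixelization of the frame
        p_pix = pixel_forest.predict_proba(frame)            # per-pixel foreground probability
        p_sp = superpixel_forest.predict_proba(superpixels)  # per-superpixel probability

        best_box, best_score = None, -np.inf
        for box in candidates_fn(prev_box):                  # candidates around the previous box
            s = score_fn(box, p_pix, p_sp, superpixels)      # joint multilevel configuration score
            if s > best_score:
                best_box, best_score = box, s

        # progressive on-the-fly update of both Online Random Forests
        pixel_forest.update(frame, best_box)
        superpixel_forest.update(superpixels, best_box)
        return best_box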


Model

Figure 1. Illustration of the structure of the proposed hierarchical appearance representation model (left) and a practical example (right). In the proposed framework, each pixel, superpixel, and bounding box is modeled as a node in a Conditional Random Field (CRF). At the pixel level, each pixel receives a measurement from a Random Forest and connects to the corresponding superpixel at the middle level. At the superpixel level, each superpixel also obtains a probability output by another Random Forest, which encourages the pixels within the same superpixel to share the same label. At the bounding box level, different candidate bounding boxes (green) are considered, and the position (red) with the best configuration is found. (a) shows the tracking result (red bounding box) at Frame #226 of the Basketball sequence. (b) displays the superpixelization of the image. (c) and (d) are the output of the pixel-level RF and the final pixel labeling, respectively, while (e) and (f) are the output of the superpixel-level RF and the final superpixel labeling.
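
Reading the caption as a two-level CRF with binary foreground/background labels for pixels and superpixels, a candidate bounding box can be scored roughly as in the sketch below. This is a hedged simplification, assuming greedy labels taken from the forest probabilities and a single consistency weight lam; the actual potentials and inference used in the paper may differ.

    import numpy as np

    def crf_score(box_mask, p_pix, p_sp, sp_labels, lam=1.0):
        # box_mask:  boolean image mask of the candidate bounding box
        # p_pix:     per-pixel foreground probability (pixel-level RF)
        # p_sp:      per-superpixel foreground probability (superpixel-level RF)
        # sp_labels: per-pixel superpixel index, so y[sp_labels] broadcasts to image shape
        x = p_pix > 0.5                  # greedy pixel labels
        y = p_sp > 0.5                   # greedy superpixel labels

        eps = 1e-8                       # avoid log(0)
        # unary terms: log-likelihood of the chosen labels
        u_pix = np.where(x, np.log(p_pix + eps), np.log(1.0 - p_pix + eps))
        u_sp = np.where(y, np.log(p_sp + eps), np.log(1.0 - p_sp + eps))

        # pairwise term: penalize pixels inside the box whose label
        # disagrees with that of the superpixel containing them
        disagree = x != y[sp_labels]
        pairwise = -lam * np.count_nonzero(disagree & box_mask)

        return u_pix[box_mask].sum() + u_sp.sum() + pairwise

Higher scores favor boxes whose interior pixels are confidently labeled foreground and whose pixel labels agree with the superpixel labels, which is what makes the joint configuration more robust to noisy single-level measurements.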


Publication

"Tracking using Multilevel Quantizations", Zhibin Hong, Chaohui Wang, Xue Mei, Danil Prokhorov, and Dacheng Tao, European Conference on Comptuer Vision(ECCV) 2014, Zurich, Switzerland [pdf] [bibtex]



Datasets

We used two evaluation datasets:

[1] Godec sequences: the videos used by Martin Godec et al. to evaluate tracking of non-rigid objects. The annotations can be found on Stefan Duffner's website.

[2] CVPR 2013 online tracking benchmark: a comprehensive dataset collected by Yi Wu et al. It contains 50 sequences and the results of 29 popular trackers.


Results

The quantitative results on [2] can be downloaded from here, and other results can be found in our paper.

Some qualitative results are shown below.

Qualitative results on dataset [1]

Mountain Bike

Skiing

Jumping

Volleyball

Qualitative results on dataset [2]

Jogging

David

Basketball

Tiger1

Attachment: results_MQT_CVPR2013benchmark.zip (152k)