Unsupervised Random Forest Indexing for Fast Action Search
                    
Gang YU, Junsong Yuan, Zicheng Liu

Abstract

Despite recent successes in small-object search in images, searching for and localizing actions in crowded videos remains a challenging problem because of (1) the large variations of human actions and (2) the intensive computational cost of searching the video space. To address these challenges, we propose a fast action search and localization method that supports relevance feedback from the user. By characterizing videos as sets of spatio-temporal interest points and building a random forest to index and match these points, we make query matching both robust and efficient. To enable efficient action localization, we propose a coarse-to-fine subvolume search scheme, which is several orders of magnitude faster than the existing video branch-and-bound search. Challenging cross-dataset searches for several actions validate the effectiveness and efficiency of our method.


Method:
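The abstract above describes indexing spatio-temporal interest points (STIPs) with an unsupervised random forest so that query points can be matched against the database efficiently. The following is a minimal sketch of that idea in Python/NumPy, not the paper's implementation: the completely random splits (random dimension, random threshold), the 72-dimensional toy descriptors, and the leaf-sharing match score are illustrative assumptions.

# Minimal sketch of unsupervised random-forest indexing for point matching.
# Assumptions (not from the paper): descriptors are plain NumPy feature rows,
# each node splits on a random dimension at a random threshold, and a query
# point's match score for a database point is the fraction of trees in which
# the two points fall into the same leaf.
import numpy as np
from collections import defaultdict


def build_random_tree(X, idx, rng, max_depth=12, min_leaf=8, depth=0):
    """Recursively build one completely random (unsupervised) tree.

    Returns either ('leaf', point_indices) or
    ('node', dim, threshold, left_subtree, right_subtree).
    """
    if depth >= max_depth or len(idx) <= min_leaf:
        return ('leaf', idx)
    dim = rng.integers(X.shape[1])              # random split dimension
    lo, hi = X[idx, dim].min(), X[idx, dim].max()
    if lo == hi:                                # degenerate: cannot split
        return ('leaf', idx)
    thr = rng.uniform(lo, hi)                   # random threshold
    left = idx[X[idx, dim] <= thr]
    right = idx[X[idx, dim] > thr]
    if len(left) == 0 or len(right) == 0:
        return ('leaf', idx)
    return ('node', dim, thr,
            build_random_tree(X, left, rng, max_depth, min_leaf, depth + 1),
            build_random_tree(X, right, rng, max_depth, min_leaf, depth + 1))


def descend(tree, x):
    """Return the database point indices stored in the leaf that x reaches."""
    while tree[0] == 'node':
        _, dim, thr, left, right = tree
        tree = left if x[dim] <= thr else right
    return tree[1]


def build_forest(X, n_trees=10, seed=0):
    rng = np.random.default_rng(seed)
    idx = np.arange(len(X))
    return [build_random_tree(X, idx, rng) for _ in range(n_trees)]


def match_query_point(forest, x):
    """Score database points by how often they share a leaf with x."""
    votes = defaultdict(int)
    for tree in forest:
        for i in descend(tree, x):
            votes[i] += 1
    n_trees = len(forest)
    return {i: c / n_trees for i, c in votes.items()}


# Toy usage: index 10,000 synthetic 72-d STIP descriptors and match one query.
db = np.random.rand(10000, 72).astype(np.float32)
forest = build_forest(db, n_trees=10)
scores = match_query_point(forest, np.random.rand(72).astype(np.float32))

Matched database points can then be aggregated into a 3D score volume over the video, and the action is localized as the subvolume with the highest total score. The sketch below illustrates only the coarse-to-fine idea, not the paper's branch-and-bound scheme: a coarse pass exhaustively scores subvolumes on a downsampled grid using an integral (summed-volume) table, and a fine pass greedily refines the six subvolume boundaries at full resolution. The volume size, downsampling step, and greedy refinement are assumptions made for illustration.

# Minimal sketch of a coarse-to-fine subvolume search over a 3D score volume.
# This is NOT the paper's branch-and-bound refinement; it only illustrates the
# coarse-to-fine idea with an integral (summed-volume) table.
import numpy as np
from itertools import product


def integral_volume(S):
    """Summed-volume table with a zero-padded border for easy lookups."""
    I = np.zeros(tuple(d + 1 for d in S.shape))
    I[1:, 1:, 1:] = S.cumsum(0).cumsum(1).cumsum(2)
    return I


def box_sum(I, x0, x1, y0, y1, t0, t1):
    """Total score inside the subvolume [x0,x1) x [y0,y1) x [t0,t1)."""
    return (I[x1, y1, t1] - I[x0, y1, t1] - I[x1, y0, t1] - I[x1, y1, t0]
            + I[x0, y0, t1] + I[x0, y1, t0] + I[x1, y0, t0] - I[x0, y0, t0])


def coarse_search(S, step=8):
    """Exhaustively score all subvolumes on a grid coarsened by `step`."""
    coarse = S[::step, ::step, ::step]   # cheap downsampling by striding (an assumption)
    I = integral_volume(coarse)
    nx, ny, nt = coarse.shape
    best, best_box = -np.inf, None
    for x0, y0, t0 in product(range(nx), range(ny), range(nt)):
        for x1, y1, t1 in product(range(x0 + 1, nx + 1),
                                  range(y0 + 1, ny + 1),
                                  range(t0 + 1, nt + 1)):
            s = box_sum(I, x0, x1, y0, y1, t0, t1)
            if s > best:
                best, best_box = s, (x0, x1, y0, y1, t0, t1)
    # Map the coarse box back to full-resolution voxel coordinates.
    return tuple(b * step for b in best_box)


def refine(S, box, radius=4):
    """Greedily adjust each boundary of the coarse box at full resolution."""
    I = integral_volume(S)
    box = list(box)
    limits = [S.shape[0], S.shape[0], S.shape[1], S.shape[1],
              S.shape[2], S.shape[2]]
    improved = True
    while improved:
        improved = False
        for i in range(6):
            for delta in range(-radius, radius + 1):
                cand = box.copy()
                cand[i] = int(np.clip(cand[i] + delta, 0, limits[i]))
                if cand[0] < cand[1] and cand[2] < cand[3] and cand[4] < cand[5]:
                    if box_sum(I, *cand) > box_sum(I, *box):
                        box, improved = cand, True
    return tuple(box)


# Toy usage: scores for a 64x64x128 video volume, then coarse localization
# followed by boundary refinement.
S = np.random.randn(64, 64, 128) * 0.1
S[20:40, 24:44, 50:90] += 1.0          # a synthetic "action" region
box = refine(S, coarse_search(S, step=8))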




Results:



  • Action Detection:

  • Action Retrieval:

  • Interactive Search:

  • Sample Results:


Sample Results on the 5-hour large dataset:
   1. Handclapping:
   2. Boxing:
   3. Ballet-spin:




Reference:

Gang Yu, Junsong Yuan, and Zicheng Liu

Unsupervised Random Forest Indexing for Fast Action Search  [pdf]

Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR'11), 2011


Journal Version:

Gang Yu, Junsong Yuan, and Zicheng Liu 

Action Search by Example using Randomized Visual Vocabularies

IEEE Trans. on Image Processing, to appear




Code: