Discriminative Orderlet Mining For Real-time Recognition of Human-Object Interaction
Gang YU, Zicheng Liu, Junsong Yuan

Abstract

This paper presents a novel visual representation, called orderlets, for real-time human action recognition with depth sensors. An orderlet is a middle-level feature that captures the ordinal pattern among a group of low-level features. For skeletons, an orderlet captures a specific spatial relationship among a group of joints. For a depth map, an orderlet characterizes a comparative relationship of the shape information among a group of subregions. The orderlet representation has two appealing properties. First, it is insensitive to small noise, since an orderlet depends only on the comparative relationships among individual features. Second, it is a frame-level representation and thus suitable for real-time online action recognition. Experimental results demonstrate its superior performance on online action recognition and cross-environment action recognition.
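The ordinal idea can be made concrete with a small sketch. The code below is illustrative only, not the paper's implementation: it assumes the primitive features are pairwise joint distances and that an orderlet's value is the index of the minimal feature within its group, so any monotonic perturbation of the raw values leaves the orderlet unchanged.

```python
# Minimal sketch of a skeleton orderlet (names and details are assumptions,
# not the authors' code). Primitive features are pairwise joint distances;
# an orderlet over a group of feature indices takes as its value the index
# (within the group) of the minimal feature -- an ordinal pattern.

from itertools import combinations
from math import dist

def pairwise_distances(joints):
    """Primitive features: Euclidean distances between all joint pairs."""
    return [dist(a, b) for a, b in combinations(joints, 2)]

def orderlet_value(features, group):
    """Index within `group` holding the minimum feature value."""
    return min(range(len(group)), key=lambda i: features[group[i]])

# Toy example: 4 joints in 3D at one frame.
joints = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
          (0.0, 2.0, 0.0), (0.0, 0.0, 3.0)]
feats = pairwise_distances(joints)   # 6 pairwise distances
group = (0, 2, 5)                    # an orderlet over 3 of them
v = orderlet_value(feats, group)

# A monotonic transform of the features leaves the value unchanged,
# illustrating insensitivity to noise in absolute measurements.
v_scaled = orderlet_value([2.0 * f + 0.1 for f in feats], group)
assert v == v_scaled
```

Because only the comparative order matters, the value is computed per frame, which is what makes the representation usable for online, frame-by-frame recognition.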


Video Demo:


Dataset:


Reference:

Gang Yu, Zicheng Liu, Junsong Yuan. Discriminative Orderlet Mining for Real-time Recognition of Human-Object Interaction. Asian Conference on Computer Vision (ACCV), 2014.