Abstract
In this paper we present a novel framework for the simultaneous detection of click actions and estimation of occluded fingertip positions from single-depth image sequences captured from an egocentric viewpoint. For both tasks, we present a novel probabilistic inference based on knowledge priors of clicking motion and clicked position. Based on the detection and estimation results, we achieve fine-grained bare-hand interaction with virtual objects from an egocentric viewpoint. Our contributions are: (i) rotation- and translation-invariant estimation of finger clicking actions and positions, combining 2D image-based fingertip detection with 3D hand posture estimation in the egocentric viewpoint; (ii) a novel spatio-temporal random forest that performs the detection and estimation efficiently in a single framework; and (iii) a selection process that uses the proposed clicking action detection and position estimation in arm-reachable AR/VR space without requiring any additional device. Experimental results show that the proposed method delivers promising performance under frequent self-occlusions when selecting objects in AR/VR space while wearing an HMD with an attached egocentric depth camera.
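To make the idea concrete, the sketch below is a rough, illustrative approximation of a spatio-temporal random forest over depth sequences: it builds features from a short window of hand-region depth patches (spatial appearance plus frame-to-frame motion) and substitutes two off-the-shelf scikit-learn forests (a click classifier and a fingertip-position regressor) for the single joint forest described in the paper. The window length, patch size, feature design, and synthetic data are assumptions made purely for illustration, not the authors' implementation.

```python
# Minimal illustrative sketch, NOT the authors' implementation: it approximates a
# spatio-temporal random forest by pairing two standard scikit-learn forests
# (click classifier + fingertip regressor) over features built from a short
# window of depth patches. Window length, patch size, and the synthetic data
# below are assumptions made purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

WINDOW = 5           # number of consecutive depth frames per sample (assumed)
PATCH = 16 * 16      # flattened hand-region depth patch size (assumed)

def spatio_temporal_features(depth_patches):
    """Concatenate per-frame depth patches with frame-to-frame differences,
    exposing both spatial appearance and temporal (clicking-motion) cues."""
    patches = np.asarray(depth_patches, dtype=np.float32)   # (WINDOW, PATCH)
    diffs = np.diff(patches, axis=0)                         # (WINDOW-1, PATCH)
    return np.concatenate([patches.ravel(), diffs.ravel()])

# Synthetic stand-in training data: click labels and 3D fingertip targets.
rng = np.random.default_rng(0)
n_samples = 100
X = np.stack([spatio_temporal_features(rng.random((WINDOW, PATCH)))
              for _ in range(n_samples)])
y_click = rng.integers(0, 2, size=n_samples)     # 1 = click action, 0 = no click
y_tip = rng.random((n_samples, 3))                # (x, y, z) fingertip position

clf = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y_click)
reg = RandomForestRegressor(n_estimators=25, random_state=0).fit(X, y_tip)

# At test time a new window is featurized the same way: the classifier flags a
# click and the regressor estimates the (possibly occluded) fingertip position.
x_new = spatio_temporal_features(rng.random((WINDOW, PATCH)))[None, :]
print("click detected:", bool(clf.predict(x_new)[0]))
print("estimated fingertip (x, y, z):", reg.predict(x_new)[0])
```

The paper performs both tasks within a single forest framework; this sketch keeps them as two separate forests only to stay within standard library APIs.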
Publication
Online demo video
Acknowledgement
This work was supported by the Global Frontier R&D Program on <Human-centered Interaction for Coexistence> funded by the National Research Foundation of Korea grant funded by the Korean Government (MSIP) (2010-0029751).

Links
Author URL: [Youngkyoon Jang], [Seung-Tak Noh], [Hyung Jin Chang], [Tae-Kyun Kim], [Woontack Woo]
Affiliation URL: [Ubiquitous Virtual Reality Lab.] (KAIST), [Imperial Computer Vision and Learning Lab.] (Imperial College London)
CHIC URL (Funding group): [Center of Human-centered Interaction for Coexistence]