Apr. 12, 2018: Check out our new dataset HMD Controller Dataset here: https://sites.google.com/view/hmd-controller-dataset!
Mar. 28, 2018: The full dataset (v1) has been uploaded and is ready to use! It contains ~360,000 image pairs for gesture recognition. To get our dataset, please follow the instructions in the Download section below.
Jan. 20, 2018: The first half of our dataset (v0.5) has been uploaded and is ready for use! This first dataset drop contains 169,349 image pairs with ground truth gesture labels and bounding boxes.
We present a comprehensive image dataset of hand gestures for AR and VR head-mounted display systems. Images are captured from a stereo monochrome fisheye pair mounted in front of a typical HMD system. For each image pair, we provide a gesture class label and the bounding box location of the hands. Here are some sample images:
Our dataset contains >400,000 images from 31 participants and 30 different environments. Participants held a small controller in their hands during the data collection. The dataset contains 8 gesture classes, which are:
Each image is annotated with a gesture name and a bounding box (coordinates normalized between 0 and 1). Annotations for images in the same folder are saved in labels.csv with the following format: image_name, gesture_label, bb_x, bb_y, bb_w, bb_h.
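As a rough sketch, the annotations can be loaded with a few lines of Python. The column order follows the format above; the exact CSV layout (e.g. whether a header row is present) and the image resolution used to denormalize the boxes should be checked against the downloaded files.

```python
import csv
import os

def load_annotations(folder):
    """Parse labels.csv in the given folder.

    Assumes one row per image with the documented columns:
    image_name, gesture_label, bb_x, bb_y, bb_w, bb_h,
    where the bounding-box values are normalized to [0, 1].
    If the file turns out to contain a header row, skip it first.
    """
    annotations = []
    with open(os.path.join(folder, "labels.csv"), newline="") as f:
        for row in csv.reader(f):
            name, label, x, y, w, h = row
            annotations.append({
                "image_name": name,
                "gesture_label": label,
                "bbox_norm": (float(x), float(y), float(w), float(h)),
            })
    return annotations

def to_pixel_bbox(bbox_norm, img_width, img_height):
    """Convert a normalized (x, y, w, h) box to pixel coordinates."""
    x, y, w, h = bbox_norm
    return (x * img_width, y * img_height, w * img_width, h * img_height)
```

Here bb_w and bb_h are read as the box width and height; multiplying the normalized values by the image resolution reported by your image loader gives pixel coordinates.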
Images often contain cluttered backgrounds and very challenging lighting conditions.
If you would like to download the dataset for research, please complete this short form to request access.
If you use the HMD Gesture dataset for research or publication, please cite:
R. Pandey, M. White, P. Pidlypenskyi, X. Wang, C. Kaeser-Chen
Real-time Egocentric Gesture Recognition on Mobile Head Mounted Displays
In: NIPS Workshop on Machine Learning on the Phone and other Consumer Devices, 2017
[arXiv]
For questions on the dataset, please contact us at hmd-gesture-discuss@googlegroups.com. You may need to first join the group through https://groups.google.com/d/forum/hmd-gesture-discuss (click on "Apply for membership").