11k Hands

Gender recognition and biometric identification using a large dataset of hand images

Mahmoud Afifi

Welcome to the 11k Hands dataset, a collection of 11,076 hand images (1600 x 1200 pixels) of 190 subjects aged between 18 and 75 years. Each subject was asked to open and close the fingers of the right and left hands. Each hand was photographed from both the dorsal and palmar sides against a uniform white background, placed at approximately the same distance from the camera. A metadata record is associated with each image and includes: (1) the subject ID, (2) gender, (3) age, (4) skin color, and (5) a set of information about the captured hand, i.e. right or left hand, hand side (dorsal or palmar), and logical indicators of whether the hand image contains accessories, nail polish, or irregularities. The proposed dataset offers a large number of hand images with more detailed metadata than existing hand datasets. The dataset is FREE for reasonable academic fair use.
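For example, the metadata can be loaded in Matlab with readtable. The following is a minimal sketch; the filename and column names used here are assumptions, so check them against the distributed files:

    % Load the per-image metadata (filename and column names are
    % assumptions; verify them against the distributed *.csv file).
    T = readtable('HandInfo.csv');
    % Example: select all palmar-side images of female subjects
    % without accessories.
    rows = strcmp(T.gender, 'female') & ...
           contains(T.aspectOfHand, 'palmar') & ...
           T.accessories == 0;
    selectedImages = T.imageName(rows);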

-----------

The paper

You can read the paper here.

Github page

You can download our source code here.

-----------

Citation

If you use the dataset, source code, or trained models provided on this webpage, please cite the following paper:

Mahmoud Afifi, "11K Hands: Gender recognition and biometric identification using a large dataset of hand images." Multimedia Tools and Applications, 2019.

@article{afifi201911kHands,
  title   = {11K Hands: gender recognition and biometric identification using a large dataset of hand images},
  author  = {Afifi, Mahmoud},
  journal = {Multimedia Tools and Applications},
  doi     = {10.1007/s11042-019-7424-8},
  url     = {https://doi.org/10.1007/s11042-019-7424-8},
  year    = {2019}
}

-----------

Statistics

The following Figures show the basic statistics of the proposed dataset.

The first Figure contains the following:

Top: the distribution of skin colors in the dataset. The number of images in each skin color category is shown at the top right of the figure. Skin detection was performed using the algorithm proposed by Conaire et al. [1].

Bottom: the statistics of (1) the number of subjects, (2) hand images (dorsal and palmar sides), (3) hand images with accessories, and (4) hand images with nail polish.

The second Figure shows the age distribution of the subjects and images in the proposed dataset.

[1] Conaire, C. O., O'Connor, N. E., & Smeaton, A. F. (2007). Detector adaptation by maximising agreement between independent data sources. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2007).

-----------

Comparison with other datasets

The following datasets are included in the comparison:

[1] Sun, Z., Tan, T., Wang, Y., & Li, S. Z. (2005). Ordinal palmprint represention for personal identification. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), Vol. 1, pp. 279-284.

[2] Yoruk, E., Konukoglu, E., Sankur, B., & Darbon, J. (2006). Shape-based hand recognition. IEEE Transactions on Image Processing, 15(7), 1803-1815.

[3] Yörük, E., Dutağaci, H., & Sankur, B. (2006). Hand biometrics. Image and Vision Computing, 24(5), 483-497.

[4] Hu, R. X., Jia, W., Zhang, D., Gui, J., & Song, L. T. (2012). Hand shape recognition based on coherent distance shape contexts. Pattern Recognition, 45(9), 3348-3359.

[5] Kumar, A. (2008). Incorporating cohort information for reliable palmprint authentication. In Sixth Indian Conference on Computer Vision, Graphics & Image Processing (ICVGIP 2008), pp. 583-590.

[6] Ferrer, M. A., Morales, A., Travieso, C. M., & Alonso, J. B. (2007). Low cost multimodal biometric identification system based on hand geometry, palm and finger print texture. In 41st Annual IEEE International Carnahan Conference on Security Technology, pp. 52-58.

-----------

Base Model

We present a two-stream CNN for gender classification using the proposed dataset. We then employ this trained two-stream CNN as a feature extractor for both gender classification and biometric identification. The latter is handled using two different approaches. In the first approach, we construct a feature vector from the deep features extracted from the trained CNN and use it to train a support vector machine (SVM) classifier. In the second approach, three SVM classifiers are trained on deep features extracted from different layers of the trained CNN, and a fourth SVM classifier is trained on local binary pattern (LBP) features; the classification scores of all SVM classifiers are then summed, which improves the correct identification rate.
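As a rough illustration of the first approach, the following Matlab sketch trains a multi-class SVM on deep features extracted from a trained network. The layer name 'fc_fusion' and the variable names are placeholders, not the exact names used in our code:

    % Sketch: deep features from a trained CNN -> multi-class SVM.
    % 'fc_fusion' is a placeholder layer name.
    featTrain = activations(net, imdsTrain, 'fc_fusion', 'OutputAs', 'rows');
    featTest  = activations(net, imdsTest,  'fc_fusion', 'OutputAs', 'rows');
    svm  = fitcecoc(featTrain, imdsTrain.Labels);  % linear SVM learners
    pred = predict(svm, featTest);
    accuracy = mean(pred == imdsTest.Labels);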

You can download the trained models and classifiers from the tables below.

-----------

Download

  • Hand images: download (632 MB)

  • Metadata: (*.mat) download (199 KB) | (*.txt) download (651 KB) | (*.csv) download (759 KB)

  • Gender classification source code‡ : download (271 KB) | ReadMe file (4 KB)

  • Trained CNNs: download from the table below.

  • Biometric identification source code*‡ : download (76,516 KB) | ReadMe file (4 KB)

  • Trained SVM classifiers: download from the table below.

  • Training-testing sets (10 folds): download from the tables below.

  • Skin mask images obtained using the skin detection technique presented by Conaire et al. [1]: download (1.65 GB)

*requires the trained CNNs found below.

‡password is provided in the ReadMe file.

[1] Conaire, C. O., O'Connor, N. E., & Smeaton, A. F. (2007). Detector adaptation by maximising agreement between independent data sources. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2007).


The Github repository is here; it also includes an example of how to reproduce our results for biometric identification.

-----------

Gender classification

Because the dataset contains more female hand images than male ones (see the statistics above), we use 1,000 dorsal hand images of each gender for training and 500 dorsal hand images of each gender for testing. The images are picked randomly such that the training and testing sets contain disjoint sets of subjects: if a subject's hand images appear in the training data, that subject is excluded from the testing data, and vice versa. The same is done for palmar-side hand images. For each side, we repeat the experiment 10 times to avoid overfitting to a particular split and report the average accuracy as the evaluation metric.
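A subject-disjoint split can be produced along the following lines (a sketch, assuming a metadata table T with an id column, as in the loading example above):

    % Sketch of a subject-disjoint random split (assumes a metadata
    % table T with an id column, as in the example above).
    ids = unique(T.id);
    ids = ids(randperm(numel(ids)));            % shuffle the subjects
    trainIDs = ids(1:round(0.7 * numel(ids)));  % e.g., 70% of subjects
    isTrain  = ismember(T.id, trainIDs);        % no subject in both sets
    isTest   = ~isTrain;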

For comparison, we have trained different image classification methods using the 10 sets of training and testing pairs. The methods are: (1) bag of visual words (BoW), (2) Fisher vector (FV), (3) AlexNet (CNN), (4) VGG-16 (CNN), (5) VGG-19 (CNN), and (6) GoogLeNet (CNN). For the first two frameworks (BoW and FV), we used three different feature descriptors: (1) SIFT, (2) C-SIFT, and (3) rgSIFT. For further comparisons, we recommend using the same evaluation criterion. To download the 10 sets of training and testing pairs used in our experiments, see the following Table:

Each set contains the following files:

    • g_imgs_training_d.txt: image filenames for training (dorsal-side)

    • g_imgs_training_p.txt: image filenames for training (palmar-side)

    • g_imgs_testing_d.txt: image filenames for testing (dorsal-side)

    • g_imgs_testing_p.txt: image filenames for testing (palmar-side)

    • g_training_d.txt: the true gender of each corresponding image filename in g_imgs_training_d.txt

    • g_training_p.txt: the true gender of each corresponding image filename in g_imgs_training_p.txt

    • g_testing_d.txt: the true gender of each corresponding image filename in g_imgs_testing_d.txt

    • g_testing_p.txt: the true gender of each corresponding image filename in g_imgs_testing_p.txt

You can use this Matlab code to extract the images used in each experiment. The code generates 10 directories, each containing the training and testing sets for each gender. You can then use the imageDatastore function to load them (see the CNN_training.m source code).
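Once the directories are generated, they can be loaded along these lines (a sketch; the directory names are assumptions, so check the generated folders):

    % Sketch: load a generated training/testing pair with imageDatastore
    % (directory names are assumptions; check the generated folders).
    imdsTrain = imageDatastore('set01/training', ...
        'IncludeSubfolders', true, 'LabelSource', 'foldernames');
    imdsTest  = imageDatastore('set01/testing', ...
        'IncludeSubfolders', true, 'LabelSource', 'foldernames');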

Trained CNN models, SVM classifiers, and results

If the Matlab Neural Network Toolbox Model support package for the corresponding network is not installed, the function provides a link to the required support package in the Add-On Explorer. To install the support package, click the link, and then click Install. Check that the installation was successful by typing the model name (e.g. alexnet, vgg16, vgg19, or googlenet) at the command line.
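For example:

    % If the support package is installed, this returns the pretrained
    % model; otherwise Matlab prints a link to the required Add-On.
    net = alexnet;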

*requires Matlab 2016 or higher.

**requires Matlab 2017b or higher.

+trained SVM classifiers using our CNN model as a feature extractor, as described in the paper. The SVM classifiers were trained on a single vector formed by concatenating the features from fc9 of the 1st stream, fc10 of the 2nd stream, and the fusion fully connected layer. The LBP/SVM classifiers were trained on the same concatenated vector with the LBP features appended.
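In code, the classifier input is one row vector per image, roughly as follows (a sketch with placeholder layer names; lbpFeatures is assumed to hold one LBP row vector per image, e.g. from extractLBPFeatures):

    % Sketch: build the concatenated feature vectors (layer names are
    % placeholders for fc9 of the 1st stream, fc10 of the 2nd stream,
    % and the fusion fully connected layer).
    f9  = activations(net, imds, 'fc9',       'OutputAs', 'rows');
    f10 = activations(net, imds, 'fc10',      'OutputAs', 'rows');
    ff  = activations(net, imds, 'fc_fusion', 'OutputAs', 'rows');
    X    = [f9, f10, ff];        % input to the SVM classifiers
    Xlbp = [X, lbpFeatures];     % input to the LBP/SVM classifiers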

-----------

Biometric identification

For biometric identification, we work with different training and testing sets. For each hand side (palmar or dorsal), we use 10 hand images per subject for training and 4 hand images per subject for testing, with 80, 100, and 120 subjects. We repeat the experiment 10 times, with the subjects and images picked randomly each time, and adopt the average identification accuracy as the evaluation metric. For further comparisons, we recommend using the same evaluation criterion. To download the 10 sets of training and testing pairs used in our experiments, see the following Table:

Each set contains the following files:

    • id_imgs_training_d_S.txt: image filenames for training (dorsal-side)

    • id_imgs_training_p_S.txt: image filenames for training (palmar-side)

    • id_imgs_testing_d_S.txt: image filenames for testing (dorsal-side)

    • id_imgs_testing_p_S.txt: image filenames for testing (palmar-side)

    • id_training_d_S.txt: the ID of each corresponding image filename in id_imgs_training_d_S.txt

    • id_training_p_S.txt: the ID of each corresponding image filename in id_imgs_training_p_S.txt

    • id_testing_d_S.txt: the true ID of each corresponding image filename in id_imgs_testing_d_S.txt

    • id_testing_p_S.txt: the true ID of each corresponding image filename in id_imgs_testing_p_S.txt

    • S is the number of subjects: 80, 100, or 120. Read the paper for more details.

You can use this Matlab code to extract the images used in each experiment. The code generates 10 directories, each containing the training and testing sets for each set of subjects. Each filename contains the ID of the subject; for example, 0000000_Hand_0000055.jpg means this image belongs to subject number 0000000, and the rest of the filename is the original image name. You can use this Matlab code to load all image filenames and extract the corresponding IDs.
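The subject ID can be recovered from such a filename along these lines (a minimal sketch):

    % Sketch: extract the subject ID prefix from a filename such as
    % '0000000_Hand_0000055.jpg'.
    fname = '0000000_Hand_0000055.jpg';
    parts = strsplit(fname, '_');
    subjectID = parts{1};   % '0000000'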

Trained SVM Classifiers and results

*trained SVM classifiers using our CNN model as a feature extractor, as described in the paper. Each .mat file contains a Classifier object where:

  • Classifier.low: the SVM classifier trained using the features extracted from the smoothed version of the input image. These CNN features are obtained from fc9 of the 1st stream.

  • Classifier.high: the SVM classifier trained using the features extracted from the detail layer of the input image. These CNN features are obtained from fc10 of the 2nd stream.

  • Classifier.fusion: the SVM classifier trained using the features extracted from the fusion layer of our CNN.

  • Classifier.lbp: the SVM classifier trained using the LBP features.

  • Classifier.all: the SVM classifier trained using the concatenated feature vector in which features from fc9 of the 1st stream, fc10 of the 2nd stream, and the fusion fully connected layer are concatenated into one vector.
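The summed-score decision described above can be sketched as follows (assuming each classifier returns scores in the same class order, and that featLow, featHigh, featFusion, and featLBP hold the corresponding per-image feature vectors):

    % Sketch: score-level fusion of the four SVM classifiers
    % (assumes a consistent class ordering across classifiers).
    [~, sLow]    = predict(Classifier.low,    featLow);
    [~, sHigh]   = predict(Classifier.high,   featHigh);
    [~, sFusion] = predict(Classifier.fusion, featFusion);
    [~, sLBP]    = predict(Classifier.lbp,    featLBP);
    scores = sLow + sHigh + sFusion + sLBP;   % sum the classification scores
    [~, k] = max(scores, [], 2);
    predictedID = Classifier.low.ClassNames(k);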

-----------

Contact us

Questions and comments can be sent to:

mafifi[at]eecs[dot]yorku[dot]ca or m.afifi[at]aun[dot]edu[dot]eg

-----------

©2019 This page contains files that could be protected by copyright.

They are provided here for reasonable academic fair use.