Iris recognition is widely accepted as one of the most accurate, stable, and reliable biometric identification technologies. However, traditional iris recognition pipelines impose strict constraints on user cooperation and imaging conditions, which seriously limit the range of applications. There has therefore been much recent work on non-cooperative or less-constrained iris recognition (e.g., at a distance, on the move, with minimal user cooperation, in dynamic imaging environments, or using mobile devices). Under these circumstances, the captured iris regions contain various types of noise, such as occlusions caused by eyelids or eyelashes, specular reflections, off-angle gaze, or blur. To make full use of such noisy iris images, efficient and robust iris segmentation is regarded as the first and most important challenge still open to the biometric community, as it affects all downstream tasks from normalization to recognition.
In 2007, the Noisy Iris Challenge Evaluation - Part I (NICE.I) was held to benchmark iris segmentation methods on the Noisy Visible Wavelength Iris Image Database (UBIRIS.v2). In 2013, the Mobile Iris CHallenge Evaluation – Part I (MICHE I) evaluated iris segmentation methods for images captured under uncontrolled settings with mobile devices, built upon a new mobile iris dataset (MICHE-I). Owing to the imaging illumination and ethnic distribution of their datasets, these two benchmarking competitions mainly evaluated segmentation methods for VIS iris images of Caucasian subjects. Moreover, most of the methods submitted to NICE.I and MICHE I were based on traditional image processing and machine learning techniques rather than the deep learning technologies that have emerged in recent years. Finally, in terms of evaluation metrics, only the segmentation accuracy of noise-free masks was evaluated directly; the localization accuracy of the inner and outer boundaries of the iris was assessed only indirectly through iris recognition performance, which is non-intuitive, time-consuming, and complex (it depends on downstream iris encoding/matching and larger iris datasets).
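For concreteness, the sketch below shows a pixel-level mask error of the kind used to score segmentation directly, in the spirit of NICE.I's E1 error rate, which averages the pixel-wise disagreement between predicted and ground-truth masks. The function name and NumPy interface are illustrative, not the official evaluation code.

    import numpy as np

    def mask_error_rate(pred_masks, gt_masks):
        """Average pixel-level disagreement between predicted and
        ground-truth binary iris masks (a NICE.I-style E1 error).

        pred_masks, gt_masks: iterables of equally sized binary arrays.
        Returns the mean fraction of pixels on which the masks disagree.
        """
        errors = []
        for pred, gt in zip(pred_masks, gt_masks):
            pred = np.asarray(pred, dtype=bool)
            gt = np.asarray(gt, dtype=bool)
            # XOR marks pixels classified differently in the two masks.
            errors.append(np.logical_xor(pred, gt).mean())
        return float(np.mean(errors))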
To address these issues, reflect the latest developments, and attract more research interest in iris segmentation, we are organizing NIR-ISL 2021, a benchmarking challenge held in conjunction with IJCB 2021 that focuses on iris segmentation and localization for NIR iris images of Asian and African subjects captured in non-cooperative environments. We explicitly split the general iris segmentation task in the conventional iris recognition pipeline into two subtasks: segmenting the noise-free iris mask and localizing the inner and outer boundaries of the iris, referred to here as iris segmentation and iris localization, respectively (see the sketch below). The main reason for this split is that many recent deep-learning-based iris segmentation methods are designed only to segment the noise-free mask and ignore the localization of the iris boundaries; such incomplete solutions are hard to deploy in the conventional iris recognition pipeline. The challenge therefore encourages the submission of complete solutions that address both iris segmentation and iris localization. Submissions covering only iris mask segmentation are also allowed, but they may not rank competitively.
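To illustrate the two expected outputs, here is a minimal sketch that assumes, purely for simplicity, circular parameterizations of the inner and outer boundaries; the IrisAnnotation container and its field names are hypothetical, not the challenge's required submission format.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class IrisAnnotation:
        """Hypothetical container for the two challenge outputs."""
        noise_free_mask: np.ndarray  # iris segmentation: binary mask of usable iris pixels
        inner_circle: tuple          # iris localization: (cx, cy, r) of the pupillary boundary
        outer_circle: tuple          # iris localization: (cx, cy, r) of the limbic boundary

    def circle_region(shape, circle):
        """Binary disk for a (cx, cy, r) circle, used to rasterize a boundary."""
        cx, cy, r = circle
        ys, xs = np.ogrid[:shape[0], :shape[1]]
        return (xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2

    def annulus_mask(ann: IrisAnnotation):
        """Iris ring implied by the localized boundaries (before noise removal)."""
        shape = ann.noise_free_mask.shape
        return circle_region(shape, ann.outer_circle) & ~circle_region(shape, ann.inner_circle)

A complete solution produces both the noise-free mask and the two boundary parameterizations, so that the iris ring from annulus_mask can be intersected with the noise-free mask before the usual normalization and encoding steps.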
The challenge is open to everyone. We invite research groups working on iris/ocular biometrics, segmentation and localization problems, or other related vision tasks to take part. A summary paper of NIR-ISL 2021, authored jointly by all participants achieving competitive performance, will be submitted for consideration at IJCB 2021. In addition, prizes and certificates sponsored by the Tianjin Academy for Intelligent Recognition Technologies (天津中科智能识别产业技术研究院) will be awarded to the top-ranking teams as follows:
1st place: $500
2nd place: $300
3rd place: $200