The growing market of biometric systems has attracted a number of companies. It has also become necessary to match performance across different sensors, i.e. in cross-sensor environments. Among ocular biometrics, sclera biometrics has gained popularity over the last few years, largely because it can extend the applicability of iris biometrics. In order to establish this concept, it is necessary to assess the biometric applicability of the sclera. A major challenge in sclera biometrics is the segmentation of the sclera region from the eye image. Various algorithms have been proposed for sclera segmentation, but it remains a challenge and an open research area. For benchmarking sclera segmentation, three competitions were organized, namely SSBC 2015, SSRBC 2016 and SSERBC 2017, in conjunction with BTAS 2015, ICB 2016 and IJCB 2017 respectively. Following the successful completion of SSBC 2015, SSRBC 2016 and SSERBC 2017, we plan to organize this competition to benchmark sclera segmentation in a cross-sensor environment.

The top-ranking participants will be invited to join as co-authors of the competition's technical report, which will be submitted to ICB 2018.


30/11/2017: Results declared.

18/11/2017: Results announcement delayed to 30th November 2017.

30/10/2017: Registration and code submission date extended until 29th November 2017.

21/08/2017: Registration open and dataset available.

20/08/2017: Site released.

Method of participation:

Competition registration is done by email. If you would like to register and receive the training dataset, please send an email to the organizers with the subject line SSBC 2018, including the following information:

Name, Affiliation, Email, Phone number, and Mailing Address.

The training dataset will be made available with the ground truth. The naming convention of the eye images is E-xxx-y-z-n.jpg, where E signifies an eye image; xxx is the class number (e.g. 001, 061, 142); y signifies the left (l) or right (r) eye; z is the gaze angle (l = left, r = right, s = straight, t = top); and n is the sample number for that particular angle. The naming convention of the mask images is M-xxx-y-z-n.jpg, where M signifies a mask image and xxx-y-z-n have the same meaning as for the eye images. The participants need to provide a Matlab script file (segmentation.m or segmentation.p) that reads the images from one directory and writes the segmented masks to another directory.
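As an illustration of the naming convention above, the following sketch (in Python, purely for illustration; actual submissions must be Matlab scripts as stated) parses a filename into its components. The function name and the returned dictionary keys are our own choices, not part of the competition protocol:

```python
import re

# Filenames follow X-xxx-y-z-n.jpg: X = E (eye) or M (mask), xxx = class
# number, y = eye side, z = gaze angle, n = sample number for that angle.
FILENAME_RE = re.compile(
    r"^(?P<kind>[EM])-(?P<subject>\d{3})-(?P<eye>[lr])-(?P<gaze>[lrst])-(?P<sample>\d+)\.jpg$"
)

def parse_name(filename):
    """Split a dataset filename into its parts; return None if it does not match."""
    m = FILENAME_RE.match(filename)
    if m is None:
        return None
    return {
        "kind": "eye" if m.group("kind") == "E" else "mask",
        "subject": int(m.group("subject")),
        "eye": {"l": "left", "r": "right"}[m.group("eye")],
        "gaze": {"l": "left", "r": "right", "s": "straight", "t": "top"}[m.group("gaze")],
        "sample": int(m.group("sample")),
    }

print(parse_name("E-001-l-s-2.jpg"))
# → {'kind': 'eye', 'subject': 1, 'eye': 'left', 'gaze': 'straight', 'sample': 2}
```

A mask filename such as M-142-r-t-1.jpg parses the same way, with kind = "mask".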

Description of prizes:

The winner will be awarded a certificate. 

Benchmark datasets: 

The competition aims to benchmark the sclera segmentation task on cross-sensor datasets. Two different datasets will be employed: the first acquired with a DSLR, the second with a mobile camera. The first database consists of 2624 RGB images of 82 identities; images were collected from both eyes of each individual, giving 164 different eyes. For each individual, four gaze angles (looking straight, left, right and up) were considered, and 4 images were captured for each angle. The individuals comprise both males and females of different skin colours; a few of them were wearing contact lenses, and the images were taken at different times of the day. The database contains images with blinking eyes, closed eyes and blurred images. High-resolution images are provided (300 dpi, 7500 x 5000 pixels), all in JPEG format. A Nikon D800 camera with a 28-300 mm lens was used for image capture. Ground truth in the form of manual sclera segmentation has been prepared for this dataset. For development purposes, a subset of the database, both eye images and ground truth (1 image for each angle of the first 30 individuals, i.e. 120 images), will be provided to the participants.

The second database consists of 400 RGB images from both eyes of 25 individuals (in other words, 50 different eyes). For each eye, 8 sample images were captured. The database contains blurred images and images with blinking eyes. The individuals comprise both males and females (12 males and 13 females) of different ages and skin colours; 2 of them were wearing contact lenses, and the images were taken at different times of the day. Variation in image quality (blur, lighting conditions, etc.) and different acquisition conditions were included intentionally to investigate performance in non-ideal scenarios. High-resolution images (3264 × 2448 pixels, 96 dpi) are included, all in JPEG format. The images were captured using a mobile phone with an 8-megapixel rear camera.

The algorithms submitted by the participants will be evaluated by the organizers. The evaluation measures will be precision and recall (recall will be the primary measure for ranking the algorithms).


Different Phases of the competition


Site opens: 20th August 2017

Registration starts: 21st August 2017

Test dataset available: 21st August 2017

Registration closes: 29th November 2017

Algorithm submission deadline: 29th November 2017

Results announcement: 30th November 2017



Submitted algorithms (evaluated on precision and recall, in %):

1. Dejan Štepec, Peter Rot, Žiga Emeršič, Peter Peer, Vitomir Štruc (Faculty of Computer and Information Science, Ljubljana)

2. Dejan Štepec, Peter Rot, Žiga Emeršič, Peter Peer, Vitomir Štruc (Faculty of Computer and Information Science, Ljubljana)

3. Chandranath Adak (Griffith University, Australia)

4. Somenath Chakraborty (Harirampur Government ITI)

5. Dejan Štepec, Peter Rot, Žiga Emeršič, Peter Peer, Vitomir Štruc (Faculty of Computer and Information Science, Ljubljana)








Organizers:

Abhijit Das (Griffith University and UTS, Australia)
Umapada Pal (ISI, Kolkata, India)
Michael Blumenstein (UTS, Australia)
Miguel A. Ferrer (ULPGC, Spain)

For any further information, please contact: