Detailed Evaluation Criteria
The evaluation of the Latent in the Wild Fingerprint Recognition Competition was conducted using several key performance indicators to ensure a comprehensive assessment of the submitted algorithms. The following metrics were used to evaluate the effectiveness of the fingerprint recognition systems: (1) the Failure to Enroll Rate (FTER), (2) False Match Rate (FMR) and False Non-Match Rate (FNMR), (3) the Equal Error Rate (EER), (4) the Receiver Operating Characteristic (ROC) curve, (5) the Area Under the ROC Curve (AUC), and (6) Computational Cost (CC).
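All of these error metrics can be derived from the raw comparison scores. As an illustrative sketch (not the competition's actual scoring code), the EER and AUC can be estimated from lists of genuine and impostor similarity scores, using the Wilcoxon–Mann–Whitney identity AUC = P(genuine score > impostor score):

```python
import numpy as np

def error_rates(genuine, impostor, threshold):
    """FMR: fraction of impostor scores at/above the threshold;
    FNMR: fraction of genuine scores below it."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    return float((impostor >= threshold).mean()), float((genuine < threshold).mean())

def eer(genuine, impostor):
    """Sweep every observed score as a threshold; the EER is read off
    where FMR and FNMR are closest to equal."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    rates = np.array([error_rates(genuine, impostor, t) for t in thresholds])
    i = np.argmin(np.abs(rates[:, 0] - rates[:, 1]))
    return float(rates[i].mean())

def auc(genuine, impostor):
    """Rank-statistic identity: AUC = P(genuine > impostor),
    counting ties as one half."""
    g = np.asarray(genuine, dtype=float)[:, None]
    i = np.asarray(impostor, dtype=float)[None, :]
    return float(((g > i) + 0.5 * (g == i)).mean())
```

A perfectly separated score set yields an EER of 0 and an AUC of 1; heavily overlapping distributions push the AUC toward 0.5.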
Competition Participants
The competition aimed to attract participants from both academia and industry, with a wide geographic and professional diversity. The call for participation was disseminated through multiple channels, including the International Joint Conference on Biometrics 2024 website, the competition's official website, various social media platforms, and private email lists. This outreach successfully attracted six registered teams from diverse backgrounds.
Of the six registered teams, three submitted valid solutions. These three teams represented three different countries, highlighting the international scope of the competition. Among the participating teams, one had an academic affiliation while the other two were from industry. Notably, one team chose to remain anonymous, and one team submitted two versions of their solution. In total, three valid solutions were received and evaluated. A summary of the participating teams is presented in Table 1 below.
Table 1. A summary of the valid submitted solutions, participating team members, affiliations, and type of institution.
Results
FTER
The FTERs for the baseline methods and submitted solutions are reported in Table 2. The submitted methods demonstrate significant improvements over the baselines in enrollment robustness. MarkIDNet stands out with an FTER of 0%, successfully enrolling every sample. LatentMinuComp_v0 also performs reliably, with an FTER of 2.7%. VeriFinger_v13.1 shows a notable improvement over its predecessor, VeriFinger_v12.3, reducing the FTER from 40.8% to 6.2%.
In comparison, the baseline methods exhibit much higher FTERs, highlighting their difficulty with such diverse samples. MCC and VeriFinger_v12.3 fail to enroll 36.7% and 40.8% of samples, respectively. MSU-LAFIS, with an FTER of 7.7%, performs better but still leaves room for improvement. Overall, the submitted methods, especially MarkIDNet and LatentMinuComp_v0, are markedly more robust and reliable than the baselines, underscoring advancements in fingerprint recognition technology.
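As a hedged illustration (assuming an enroller that returns None when it cannot build a template, which is not necessarily the competition's actual interface), the FTER is simply the share of samples that fail enrollment:

```python
def fter(samples, enroll):
    """Failure-to-Enroll Rate: the fraction of samples for which the
    enrollment stage yields no usable template (signalled here by None)."""
    failures = sum(1 for sample in samples if enroll(sample) is None)
    return failures / len(samples)
```

With this convention, MarkIDNet's 0% FTER corresponds to an enroller that never returns None on the evaluation set.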
Table 2. FTER for submitted solutions and baseline method.
System error rates
VeriFinger_v13.1 is the top-performing method, achieving rank 1. It has the highest AUC (0.854), indicating superior accuracy in distinguishing between genuine and impostor matches, and the lowest EER (0.228). Its errors at fixed FMR operating points (FMR100 = 0.308, FMR20 = 0.291, FMR10 = 0.275) likewise reflect effective control of false matches. LatentMinuComp_v0, ranked 2, also performs strongly with an AUC of 0.791; although slightly below VeriFinger_v13.1, it still improves substantially on the baseline methods. Its EER of 0.290 is higher than that of VeriFinger_v13.1, indicating a greater error rate, but it maintains reasonable operating-point errors (FMR100 = 0.470, FMR20 = 0.420, FMR10 = 0.384). MarkIDNet, ranked 3, has an AUC of 0.551, the lowest among the submissions; its higher EER (0.481) and operating-point errors (FMR100 = 0.838, FMR20 = 0.804, FMR10 = 0.766) reflect difficulty in separating genuine from impostor matches. While functional, MarkIDNet thus has clear room for improvement in accuracy and error minimization.
Comparing the submitted methods to the baselines provides valuable context for these improvements. MSU-LAFIS shows an AUC of 0.571, but its high EER (0.475) indicates frequent errors. MCC has a respectable AUC of 0.754, still below the two strongest submissions. VeriFinger_v12.3 performs well with an AUC of 0.808 and an EER of 0.267, but is surpassed by its updated version: VeriFinger_v13.1 improves the AUC from 0.808 to 0.854 and reduces the EER from 0.267 to 0.228, reflecting better error management in the newer version. LatentMinuComp_v0 likewise improves on both MSU-LAFIS and MCC, achieving a higher AUC and lower EER, and its FNMR at the fixed FMR operating points is more favorable than that of these baselines, demonstrating better control of false matches.
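The FMR100, FMR20 and FMR10 figures follow the usual FVC convention: the lowest FNMR reachable while the FMR is held at or below 1%, 5% and 10%, respectively. A minimal sketch of that computation, again assuming raw genuine/impostor score lists rather than the organizers' code, is:

```python
import numpy as np

def fnmr_at_fmr(genuine, impostor, max_fmr):
    """Lowest FNMR over all thresholds whose FMR stays at or below
    max_fmr (e.g. 1/100 for FMR100, 1/20 for FMR20, 1/10 for FMR10)."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    best = 1.0
    for t in np.unique(np.concatenate([genuine, impostor])):
        if (impostor >= t).mean() <= max_fmr:          # threshold is admissible
            best = min(best, float((genuine < t).mean()))
    return best
```

Because raising the threshold lowers FMR but raises FNMR, tightening the FMR budget can only increase the reported FNMR, which is why FMR100 ≥ FMR20 ≥ FMR10 for every method in Table 3.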
The above overall evaluation results are reported in Table 3. In summary, the submitted methods show clear improvements over the baseline methods, with VeriFinger_v13.1 leading in performance. LatentMinuComp_v0 also exhibits substantial advancements, while MarkIDNet, although improved over some baselines, still has the potential for further enhancement. These comparisons underscore the progress made in fingerprint recognition technology, as reflected by the submitted methods' superior performance metrics.
Table 3. Performance indicators measured on the LFIW database for the overall comparison experiments.
Comparison scores distribution
The score distribution graphs for the submitted solutions and baseline methods are shown in Figure 1. These distributions offer insight into how well each method separates genuine from impostor fingerprint matches. LatentMinuComp_v0 assigns lower similarity scores to both genuine and impostor matches, with some overlap in the lower score range. MarkIDNet shows higher variability in genuine scores, indicating some capacity to distinguish matches, but it also assigns a wide range of scores to impostors, which can lead to more false positives. VeriFinger_v13.1 exhibits a clear separation between genuine and impostor scores, indicating high accuracy and reliability with fewer false positives.
Comparing the submitted methods to the baseline methods reveals significant improvements. VeriFinger_v13.1 shows the best performance with clear separation between genuine and impostor scores, reflecting high accuracy and low error rates. LatentMinuComp_v0 also performs well, although it has more overlap between genuine and impostor scores than VeriFinger_v13.1. MarkIDNet shows room for improvement in minimizing false positives. Among the baseline methods, VeriFinger_v12.3 performs better than MSU-LAFIS and MCC, but is still surpassed by its newer version, VeriFinger_v13.1. This analysis underscores the advancements in fingerprint recognition technology, with the submitted methods showing more accurate and reliable performance.
Figure 1. Distributions of the overall comparison scores in the LFIW database. The baseline methods are on the left column and the submitted solutions are on the right column.
DET and ROC
The DET (Figure 2(a)) and ROC (Figure 2(b)) curves provide a comprehensive view of the performance of the three submitted solutions against the baseline methods. VeriFinger_v13.1 exhibits the best performance, with the lowest FNMR across the range of FMR levels and the highest AUC (0.854), indicating superior accuracy and reliability in fingerprint recognition. LatentMinuComp_v0 follows closely with a high AUC (0.791) and favorable FNMR, though it is slightly less effective than VeriFinger_v13.1. MarkIDNet, while showing improvements over some baseline methods, has the lowest AUC (0.551) among the submitted methods, reflecting greater difficulty in achieving high accuracy and minimizing errors.
Among the baseline methods, VeriFinger_v12.3 outperforms MSU-LAFIS and MCC, with a higher AUC (0.808) and lower FNMR. However, the submitted methods, particularly VeriFinger_v13.1, demonstrate significant advancements over these baselines. This analysis underscores the progress in fingerprint recognition technology, highlighting the superior accuracy and reliability of the submitted methods compared to the baselines.
Figure 2. DET and ROC curve (in log scale) of the overall comparison experiments for the LFIW database.
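For reference, the (FMR, FNMR) operating points that make up a DET curve can be traced by sweeping a decision threshold over the observed scores, a sketch under the same assumption of raw genuine/impostor score lists:

```python
import numpy as np

def det_points(genuine, impostor):
    """(FMR, FNMR) operating points swept over every observed score,
    ready to plot on log-scaled axes as a DET curve."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    points = []
    for t in np.unique(np.concatenate([genuine, impostor])):
        points.append((float((impostor >= t).mean()),   # FMR at this threshold
                       float((genuine < t).mean())))    # FNMR at this threshold
    return points
```

Plotting these pairs with both axes on a log scale reproduces the trade-off view of Figure 2(a): methods whose curves hug the lower-left corner make fewer errors at every operating point.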
Computational Cost
Table 4 shows the computational costs for the six evaluated methods. The CC of each method is measured as the time taken to process one pair of fingerprints, encompassing pre-processing, feature extraction, and comparison. MSU-LAFIS has the highest CC at 4.47 seconds, indicating slower processing, while MCC is more efficient at 1.36 seconds. Among the submitted solutions, VeriFinger_v13.1 is the most efficient with a CC of 0.78 seconds, marginally better than VeriFinger_v12.3 at 0.79 seconds. LatentMinuComp_v0 and MarkIDNet exhibit CCs of 0.93 and 0.88 seconds, respectively, and are also highly efficient. Overall, the submitted methods are substantially faster than MSU-LAFIS and MCC and on par with VeriFinger_v12.3, showing that their accuracy gains do not come at the price of slower processing.
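Per-pair cost of this kind can be measured with a wall-clock sketch like the following (pipeline and pairs are illustrative placeholders, not the evaluation harness; taking the best of several repeats damps timer noise):

```python
import time

def cost_per_pair(pairs, pipeline, repeats=3):
    """Average wall-clock seconds per fingerprint pair over the whole
    pipeline call (pre-processing, feature extraction, comparison),
    taking the best of `repeats` runs to reduce timing noise."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        for probe, reference in pairs:
            pipeline(probe, reference)
        best = min(best, (time.perf_counter() - start) / len(pairs))
    return best
```

`time.perf_counter` is used rather than `time.time` because it is a monotonic, high-resolution clock intended for interval measurement.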
Table 4. CC for submitted solutions and baseline methods.
Final Rank
Congratulations to all the participating teams!