Published Date: 10/30/2025
IDnow, a prominent identity verification provider, has reported notable progress in reducing algorithmic bias in facial recognition systems. This advancement is a result of the company's participation in the EU-funded MAMMOth (Multi-Attribute, Multimodal Bias Mitigation in AI Systems) project. The MAMMOth project, supported by Horizon Europe, brought together leading academic and industry partners to tackle fairness in artificial intelligence across multiple modalities.
The company's efforts were spurred by the widely cited 2018 “Gender Shades” study from the MIT Media Lab, which highlighted the need for more inclusive data and better model calibration. Biometric testing by the National Institute of Standards and Technology (NIST) has consistently found that the majority of facial recognition algorithms are more likely to misidentify people with darker skin, women, and the elderly. However, the most accurate algorithms show very low differentials in the Institute’s latest testing.
As part of the MAMMOth project, IDnow focused on identifying and mitigating bias in its facial recognition algorithms. One key challenge was the variation in skin tone representation caused by ID photo color adjustments, which can distort comparisons between selfies and official documents. To address this, IDnow applied a style transfer technique to diversify its training data, improving model resilience and reducing bias toward darker skin tones. Other tools developed by IDnow address bias at other stages of the pipeline, such as in its biometric matching algorithms.
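To illustrate the general idea, here is a minimal sketch of style-transfer-based training data augmentation. IDnow has not published its actual model, so a simple per-channel color-statistics transfer stands in for it here; the function names, image shapes, and data layout are assumptions made for the example.

```python
# Minimal sketch: diversify training selfies by re-mapping their color
# statistics to match randomly chosen ID photos, mimicking the color
# adjustments applied to official documents. A stand-in for a learned
# style transfer model, not IDnow's actual method.
import numpy as np

def transfer_color_style(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Re-map the source image's per-channel mean/std to match the reference.

    Both images are float arrays of shape (H, W, 3) with values in [0, 1].
    The output keeps the source's content but adopts the reference's
    color rendering (e.g. an ID photo's color adjustments).
    """
    src_mean, src_std = source.mean(axis=(0, 1)), source.std(axis=(0, 1)) + 1e-8
    ref_mean, ref_std = reference.mean(axis=(0, 1)), reference.std(axis=(0, 1))
    stylized = (source - src_mean) / src_std * ref_std + ref_mean
    return np.clip(stylized, 0.0, 1.0)

def augment_batch(selfies: list[np.ndarray], id_photos: list[np.ndarray],
                  rng: np.random.Generator) -> list[np.ndarray]:
    """Restyle each selfie with the color statistics of a random ID photo."""
    return [transfer_color_style(s, id_photos[rng.integers(len(id_photos))])
            for s in selfies]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    selfies = [rng.random((112, 112, 3)) for _ in range(4)]
    id_photos = [rng.random((112, 112, 3)) * 0.8 + 0.1 for _ in range(2)]
    augmented = augment_batch(selfies, id_photos, rng)
    print(len(augmented), augmented[0].shape)
```

In practice the augmented pairs would be mixed into the original training set so the model sees the same faces under many color renderings, which is what makes it less sensitive to document-specific color shifts.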
The results were significant: verification accuracy increased by 8 percent, even while using only 25 percent of the original training data volume. The accuracy gap between lighter and darker skin tones was cut by more than 50 percent. The enhanced AI model was integrated into IDnow’s identity verification platform in March 2025 and has been in active use since.
“Research projects like MAMMOth are crucial for closing the gap between scientific innovation and practical application,” says Montaser Awal, director of AI and ML at IDnow. “By collaborating with leading experts, we were able to further develop our technology in a targeted manner and make it more equitable.”
IDnow plans to adopt the MAI-BIAS open-source toolkit developed during the project to evaluate fairness in future AI models. This will allow the company to document biometric bias mitigation efforts and ensure consistent standards across markets. “Addressing bias not only strengthens fairness and trust but also makes our systems more robust and adoptable,” adds Awal. “This will raise trust in our models and show that they work equally reliably for different user groups across different markets.”
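As a rough illustration of the kind of per-group evaluation such a toolkit enables, the sketch below computes a false non-match rate for each demographic group and the gap between groups. The metric choice, function names, and data format are assumptions for the example, not the MAI-BIAS API.

```python
# Illustrative per-group fairness check: false non-match rate (genuine pairs
# wrongly rejected) per demographic group, plus the gap between groups.
from collections import defaultdict

def false_non_match_rate_by_group(records):
    """Compute the false non-match rate separately for each group.

    `records` is an iterable of (group, is_genuine_pair, accepted) tuples.
    """
    genuine = defaultdict(int)
    rejected = defaultdict(int)
    for group, is_genuine_pair, accepted in records:
        if is_genuine_pair:
            genuine[group] += 1
            if not accepted:
                rejected[group] += 1
    return {g: rejected[g] / genuine[g] for g in genuine if genuine[g]}

def fairness_gap(rates):
    """Largest difference in error rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values) if values else 0.0

if __name__ == "__main__":
    records = [
        ("lighter", True, True), ("lighter", True, True), ("lighter", True, False),
        ("darker", True, True), ("darker", True, False), ("darker", True, False),
    ]
    rates = false_non_match_rate_by_group(records)
    print(rates, "gap:", round(fairness_gap(rates), 3))
```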
The MAMMOth project, which ran for 36 months and concluded this month, aimed to tackle gender, race, and other biases in AI. Its work aligns with the broader prohibition of discrimination enshrined in EU law. As AI becomes more ubiquitous in various domains, including health, education, justice, personal security, and work, MAMMOth sought to identify characteristics that are not protected under the law but which have been shown to lead to bias in AI systems. These characteristics could include school grades, living situation, disability, age, native language, dialect, sexuality, and socioeconomic status.
Since AI trains on available data, biases could become further entrenched. For example, psychology studies disproportionately draw participants from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies, which may not accurately reflect the responses of individuals from diverse cultural backgrounds. More information on these characteristics can be found on the MAMMOth website.
By addressing these biases, IDnow and the MAMMOth project are paving the way for more equitable and reliable AI systems, ultimately fostering greater trust and adoption in various sectors.
Q: What is the MAMMOth project?
A: The MAMMOth project (Multi-Attribute, Multimodal Bias Mitigation in AI Systems) is an EU-funded initiative that aims to tackle fairness in artificial intelligence across multiple modalities. It brings together leading academic and industry partners to address bias in AI systems.
Q: How did IDnow reduce bias in facial recognition?
A: IDnow applied a style transfer technique to diversify its training data and improve model resilience, reducing bias toward darker skin tones. The company also developed tools that address bias at other stages, such as in its biometric matching algorithms.
Q: What were the results of IDnow's efforts?
A: Verification accuracy increased by 8 percent, even while using only 25 percent of the original training data volume. The accuracy gap between lighter and darker skin tones was cut by more than 50 percent.
Q: Why is addressing bias in AI important?
A: Addressing bias in AI strengthens fairness and trust, making systems more robust and adoptable. It ensures that AI models work reliably for different user groups across various markets.
Q: What is the MAI-BIAS open-source toolkit?
A: The MAI-BIAS open-source toolkit is a resource developed during the MAMMOth project to evaluate fairness in future AI models. It helps companies document biometric bias mitigation efforts and ensure consistent standards across markets.