Adapting Foundation Models for Face Recognition (IJCB-AFMFR-2026)
At IJCB 2026, Rome, Italy
The aim of this competition is to systematically benchmark, evaluate, and compare adaptation strategies for foundation models on the downstream task of face recognition (FR) within a privacy-friendly framework. By providing a standardized evaluation protocol and metrics, the competition will highlight the strengths and limitations of different approaches, including their ability to generalize across diverse datasets.
The competition will feature two tracks:
In the first track, participants will be provided with the full training dataset.
In the second track, only a small subset of the data may be used to adapt the models.
The results are expected to guide future research and encourage the development of effective, data-efficient adaptation methods for foundation models in FR.
The final competition paper will be submitted to IEEE/IAPR IJCB 2026 and the top-performing teams will be invited as co-authors.
09.03.2026: Website is now live!
09.03.2026: Call for participation, website release and registration open
18.03.2026: Release of training data instructions
10.05.2026: Deadline for algorithm submission
20.05.2026: Announcement of results to participants
30.05.2026: Submission of competition paper to IJCB 2026
For this competition, we employ synthetic training datasets. Participants in the first track may use the full dataset, while participants in the second, low-data track will be provided with a preselected subset. The training data will be provided after registration (see the registration procedure).
Participants are required to use the provided training datasets exclusively for model adaptation and may not incorporate any external data. The data is exclusively synthetic and labeled by identity.
Registration for the competition can be completed by filling out the following form: Registration Form
Training data for both tracks will be available after registration. The download link will be sent to participants via email.
*To ensure competition fairness, participating teams may not include members affiliated with any of the organizing institutes (i.e., Fraunhofer IGD, TU Darmstadt, and NTNU).
Instructions for submission, along with details about the data and test code, are available on the competition GitHub page: GitHub Link. (Release on 15.03.2026)
In this competition, participants are restricted to the Contrastive Language–Image Pretraining (CLIP) foundation model, specifically the pretrained ViT-B/16 variant. Participants are free to use both encoders or only the image encoder, depending on their approach.
Submissions must be provided as a ZIP file containing one or two pretrained models, depending on the selected track. Participants are free to enter Track 1, Track 2, or both tracks. Teams may upload their adapted models as a ZIP file to a cloud provider of their choice, provided it is accessible in Germany without requiring account registration.
The top-performing models will be retrained and reevaluated by the competition organizers. Please note that the evaluation data will not be released to participants. All submissions should adhere strictly to the instructions to ensure fair and consistent evaluation.
Participants are restricted to using the pretrained ViT-B/16 variant of CLIP.
The final submitted model must have exactly the same architecture as CLIP ViT-B/16.
Participants may use the image encoder alone or both the image and text encoders of the CLIP ViT-B/16 foundation model.
The use of external models other than CLIP ViT-B/16 (image and text encoders) is not allowed. This also prohibits the use of, for example, pretrained face recognition models for knowledge distillation.
No external setup, installation, or internet access is allowed at runtime during the evaluation.
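Once adapted, a face recognition encoder is typically evaluated by comparing the embeddings it produces for pairs of face images, commonly via cosine similarity. The sketch below illustrates this comparison step only; it uses random placeholder vectors in place of real CLIP ViT-B/16 image features (which are 512-dimensional), and the threshold shown is illustrative, not part of the competition protocol.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder embeddings standing in for adapted CLIP ViT-B/16 image
# features; ViT-B/16 projects images to 512-dimensional embeddings.
rng = np.random.default_rng(0)
emb_a = rng.standard_normal(512)
emb_b = rng.standard_normal(512)

score = cosine_similarity(emb_a, emb_b)

# A verification decision thresholds the score; in practice the threshold
# is chosen on a validation set, e.g. at a fixed false match rate.
is_match = score > 0.3  # illustrative threshold, not the competition's
```

The actual evaluation protocol and metrics are defined by the organizers and will be specified in the test code released on the competition GitHub page.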
tahar.chettaoui@igd.fraunhofer.de
fadi.boutros@igd.fraunhofer.de
yusq@sustech.edu.cn
vitomir.struc@fe.uni-lj.si
naser.damer@igd.fraunhofer.de
In case of questions or clarifications please contact: Tahar Chettaoui (tahar.chettaoui@igd.fraunhofer.de) or any of the organizers.