Participation
Terms and Conditions
You agree not to distribute the Imperial Light-Stage Head (ILSH) dataset without prior written permission. The ILSH dataset is copyrighted by Imperial College London. You may not use the material for commercial purposes. Please read the End User License Agreement.
Participation Guidelines
To gain access to the challenge, you need to follow these steps:
Sign the End User License Agreement for accessing the Imperial Light-Stage Head (ILSH) Dataset and email it to s.zafeiriou@imperial.ac.uk and jiali.zheng@imperial.ac.uk. Once approved, you will receive a password for downloading the dataset. Please have the team lead (supervisor) sign the agreement.
Fill in the form for accessing the challenge at the link: https://forms.office.com/e/rJRvk9mHFJ. The challenge organizers will review the information submitted in the form and grant you access to the challenge. Please note that both steps above must be completed successfully to gain access to the challenge. Also, please keep in mind that: i) only institutional email addresses are allowed - general email addresses (e.g., @gmail, @hotmail, etc.) will not be accepted; ii) the team lead cannot be a student - only faculty/line managers can be specified as team leads.
The View Synthesis Challenge for Human Heads is hosted on CodaLab. Please find details of challenge participation, result submission and evaluation there.
Imperial Light-Stage Head (ILSH) Dataset Overview
52 Subjects: This dataset includes 52 participants, each represented in a separate folder. (Data for two subjects will be provided as a toy example.)
24 Cameras: A total of 24 cameras were utilized to capture images of the participants.
1,248 Images: The dataset contains 1,248 4K-resolution images (3000 × 4096, width × height) of the 52 subjects, captured using the 24 cameras.
1,248 Masks: Masks covering the border regions introduced by the calibration-based image undistortion preprocessing applied to the dataset. We recommend using these masks to exclude the border regions during training.
1,248 Blender and LLFF-Type Poses: Each subject folder includes a Blender-type pose file (transforms.json) and an LLFF-type pose file (pose_bounds.npy), each containing that subject's 24 camera poses (with additional information). In total, 52 pairs of these files are provided in the respective subject folders.
* Note: Throughout the challenge, you will be given a subset of the dataset for training. Validation and test view ground-truth images will be hidden, and results will only be reported through CodaLab submission. We release 2 subjects with 24-view ground-truth images and input poses as a toy example of this dataset. Bear in mind that the difficulty comes not only from the sparsity of camera viewpoints but also from variation in subject appearance, so obtain a validation score via CodaLab submission.
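The pose files described above can be parsed with a few lines of NumPy. The sketch below assumes the standard LLFF layout for pose_bounds.npy (an N × 17 array per file: a flattened 3 × 5 matrix holding the 3 × 4 camera-to-world extrinsics plus a [height, width, focal] column, followed by near/far depth bounds); the demo data at the bottom, including the focal value, is synthetic and not taken from the dataset.

```python
import numpy as np
import tempfile, os

def load_llff_poses(path):
    """Parse an LLFF-style pose file (pose_bounds.npy in this dataset).
    Assumes the standard LLFF layout: each of the N rows is a flattened
    3x5 matrix (3x4 camera-to-world extrinsics plus a [H, W, focal]
    column) followed by near/far depth bounds."""
    arr = np.load(path)                               # (N, 17)
    mats = arr[:, :15].reshape(-1, 3, 5)              # (N, 3, 5)
    c2w = mats[:, :, :4]                              # (N, 3, 4) extrinsics
    hwf = mats[:, :, 4]                               # (N, 3) height/width/focal
    bounds = arr[:, 15:17]                            # (N, 2) near/far
    return c2w, hwf, bounds

# Synthetic demo: 24 identity poses at the ILSH image size (3000 x 4096).
# The focal length (2900) and bounds (0.5, 6.0) are placeholders.
demo = np.zeros((24, 17), dtype=np.float64)
pose = np.hstack([np.eye(3), np.zeros((3, 1)), [[4096], [3000], [2900]]])
demo[:, :15] = pose.reshape(-1)
demo[:, 15:] = [0.5, 6.0]

with tempfile.TemporaryDirectory() as d:
    f = os.path.join(d, "pose_bounds.npy")
    np.save(f, demo)
    c2w, hwf, bounds = load_llff_poses(f)
    print(c2w.shape, hwf.shape, bounds.shape)  # (24, 3, 4) (24, 3) (24, 2)
```

The Blender-type transforms.json can be read with the standard json module in the same spirit; check the toy-example files for the exact fields before relying on any particular key.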
Please find links to download the dataset on the CodaLab webpage.
Prizes and Awards
1st Prize: 3000 USD
2nd Prize: 2000 USD
3rd Prize: 1000 USD