A high-fidelity multi-view human head dataset designed for sparse-view novel view synthesis and realistic avatar modeling
What is ILSH?
ILSH is a high-quality dataset for human head view synthesis, consisting of:
52 identities
24 synchronized camera viewpoints
4K resolution images (3000 × 4096)
Calibrated camera poses and masks
The dataset is captured using a geodesic light-stage system, ensuring consistent illumination and accurate geometry across all subjects.
High-Fidelity Capture
Consistent lighting and precise calibration enable photorealistic rendering.
Sparse-View Setup
Carefully designed viewpoint selection creates realistic reconstruction challenges.
Multi-View Consistency
All images are synchronized and geometrically aligned.
Ready for Research
Includes camera parameters, masks, and standardized data splits.
Why it matters:
Recent advances in neural rendering have shown strong performance under dense-view settings. However, real-world applications often require reconstruction from limited viewpoints. ILSH is designed to bridge this gap by providing a challenging yet realistic sparse-view benchmark.
Why Sparse Views?
Unlike dense-view datasets, which rely on uniformly distributed cameras and simplified visibility, ILSH is captured with a limited set of strategically placed viewpoints. This preserves realistic geometry and occlusion patterns that cannot be replicated by simply subsampling dense data, making ILSH a more faithful and challenging benchmark for real-world sparse-view reconstruction.
Capture Setup:
Geodesic light-stage dome: 2.5 m in diameter
24 machine vision cameras (Basler boA4112-68cc)
Uniform white illumination: 82 light sources with high-power RGBW LEDs (OSRAM OSTAR)
Synchronized image acquisition
Data Includes:
1,248 Images: The dataset contains 1,248 images at 4K resolution (3000 × 4096, width × height) of the 52 subjects, one per subject per camera across the 24 cameras.
1,248 Camera Poses (in two formats): Each subject folder contains two camera pose files: one in a Blender-compatible data loader format (transforms.json) and one in an LLFF-compatible data loader format (pose_bounds.npy). Each file holds the 24 subject-dependent poses plus additional metadata; in total there are 52 pairs of these files, one pair per subject folder. A loading sketch appears after this list.
1,248 Masks: Because calibration-based image undistortion is applied as a preprocessing step, some pixels near the image borders remain empty. We provide binary masks of the border regions resulting from this calibration, with a value of 1 for non-empty pixels and 0 for empty pixels. We recommend using these masks to exclude empty border regions during training; a masked-loss sketch follows the example figures below.
Train / validation / test splits
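As a rough illustration of how the pose files can be read, the sketch below assumes the standard Blender/NeRF layout for transforms.json (a frames list of 4 × 4 transform_matrix entries) and the standard LLFF layout for pose_bounds.npy (one 17-value row per view). The subject folder name is hypothetical.

```python
# A loading sketch, assuming the standard Blender/NeRF layout for
# transforms.json and the standard LLFF layout for pose_bounds.npy.
# The subject folder name below is hypothetical.
import json
import numpy as np

def load_blender_poses(path):
    """Return an (N, 4, 4) array of camera-to-world matrices."""
    with open(path) as f:
        meta = json.load(f)
    return np.array(
        [frame["transform_matrix"] for frame in meta["frames"]],
        dtype=np.float32,
    )

def load_llff_poses(path):
    """Return (N, 3, 5) pose matrices and (N, 2) near/far bounds."""
    data = np.load(path)                    # (N, 17) in the LLFF convention
    poses = data[:, :15].reshape(-1, 3, 5)  # 3x4 extrinsics + [H, W, focal] column
    bounds = data[:, 15:]                   # per-view near/far depth bounds
    return poses, bounds

poses_b = load_blender_poses("subject_001/transforms.json")
poses_l, bounds = load_llff_poses("subject_001/pose_bounds.npy")
print(poses_b.shape, poses_l.shape, bounds.shape)  # (24, 4, 4) (24, 3, 5) (24, 2)
```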
Example of one subject:
Total subjects: 52
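Below is a minimal sketch of the recommended mask usage during training, assuming the masks are stored as single-channel images aligned with the RGB views; the file names are hypothetical and pred is a random stand-in for a rendered view.

```python
# A minimal sketch of masked training, assuming the masks are single-channel
# images aligned with the RGB views (1 = valid pixel, 0 = empty border).
# File names are hypothetical; `pred` stands in for a rendered view.
import numpy as np
from PIL import Image

def masked_mse(pred, target, mask):
    """Mean squared error over valid (mask == 1) pixels only."""
    valid = np.asarray(mask, dtype=bool)
    return float(np.mean((pred[valid] - target[valid]) ** 2))

target = np.asarray(Image.open("subject_001/images/cam_00.png"), np.float32) / 255.0
mask = np.asarray(Image.open("subject_001/masks/cam_00.png")) > 0   # (H, W) bool
pred = np.random.rand(*target.shape).astype(np.float32)             # placeholder render

loss = masked_mse(pred, target, mask)
```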
ILSH supports a wide range of research tasks:
Novel view synthesis
Sparse-view 3D reconstruction
Human head avatar generation
Generalization across identities
VSCHH Benchmark:
ILSH was used in the View Synthesis Challenge for Human Heads (VSCHH), providing a standardized benchmark for evaluating methods under sparse-view conditions.
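As an illustration of sparse-view evaluation with the provided masks, one natural metric is a masked PSNR along the lines below; this is a sketch, not necessarily the official VSCHH protocol.

```python
# A masked PSNR sketch for scoring synthesized views; illustrative only and
# not necessarily the official VSCHH evaluation protocol.
import numpy as np

def masked_psnr(pred, target, mask, max_val=1.0):
    """PSNR over valid (mask == 1) pixels, with images in [0, max_val]."""
    valid = np.asarray(mask, dtype=bool)
    mse = np.mean((pred[valid] - target[valid]) ** 2)
    return 10.0 * np.log10((max_val ** 2) / mse)
```

Masking keeps the empty border pixels, which are trivially reproducible, from inflating the score.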
Access Policy:
Due to ethical considerations and the presence of facial data, access is provided under a controlled license.
Sign the End User License Agreement (EULA) for access to the Imperial Light-Stage Head (ILSH) Dataset and email it to s.zafeiriou@imperial.ac.uk and jiali.zheng@imperial.ac.uk. Once your request is approved, you will receive a link to download the dataset.
Only institutional email addresses are allowed; general email addresses (e.g., @gmail, @hotmail) will not be accepted.
Please ask your supervisor to sign the EULA. Only faculty or line managers are authorized to sign, not students.
A Starter Kit is provided with supporting scripts for tasks such as restructuring the downloaded dataset, loading data, and evaluating results. We hope it lowers the barrier for researchers developing their methods on our dataset.
Further details of the dataset are provided in the accompanying dataset paper.
The ILSH Dataset and its derivatives may be used for non-commercial research and education purposes only. You agree not to copy, sell, trade, or exploit the dataset for any commercial purpose. In any published research using the data, you must cite the following paper:
@inproceedings{zheng2023ilsh,
title={{ILSH}: The Imperial Light-Stage Head Dataset for Human Head View Synthesis},
author={Zheng, Jiali and Jang, Youngkyoon and Papaioannou, Athanasios and Kampouris, Christos and Potamias, Rolandos Alexandros and Papantoniou, Foivos Paraperas and Galanakis, Efstathios and Leonardis, Ale{\v{s}} and Zafeiriou, Stefanos},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={1112--1120},
year={2023}
}