The Challenge seeks contributions on smartphone image deblurring guided by depth maps acquired by the same smartphone. The task is to deblur real low-light images taken with an Apple iPhone 15 Pro, using both the blurred image and the co-registered depth map produced by the onboard Lidar sensor. The deblurred images will be compared to registered sharp ground truth images by means of the LPIPS perceptual quality metric.
Training and validation data are provided from a novel dataset of low-light iPhone images affected by noise and motion blur, each paired with a registered Lidar depth map and a sharp ground truth image. These images closely match the characteristics of the test images. Participants may also use the ARKitScenes dataset to pretrain their models by simulating motion blur.
Link to the training and test data:
https://www.dropbox.com/scl/fo/6gn4yrvunr9gt5ai50y7t/ABbVNzLure_pUqzoxe3bJhs?rlkey=7exk9tkg7pqa0axyoahf10fp8&st=k03uzg60&dl=0
The training/validation set consists of real low-light images affected by motion blur, acquired with an iPhone 15 Pro. Each image has a registered depth map produced by the Lidar sensor of the iPhone 15 Pro. There are 45 images in total, corresponding to 512x512 regions of interest, with one directory per image. Each image directory contains three subdirectories (a minimal loading sketch follows the list below):
rgb: the blurred RGB image;
depth: the Lidar depth map registered to the RGB image after bicubic upsampling to the image resolution;
gt: the sharp ground truth image.
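For illustration, here is a minimal Python sketch for reading one training sample from this layout. The exact file names inside each subdirectory and the depth file format are assumptions; if the depth map is stored as a .npy array rather than an image, the corresponding loading call would need to change.

```python
# Minimal loading sketch for one training sample (assumed file formats:
# the first file found in each subdirectory is used).
from pathlib import Path
import numpy as np
from PIL import Image

def load_sample(sample_dir):
    """Load the blurred RGB image, the registered depth map, and the sharp
    ground truth for one 512x512 training sample."""
    sample_dir = Path(sample_dir)
    rgb_path   = next((sample_dir / "rgb").iterdir())
    depth_path = next((sample_dir / "depth").iterdir())
    gt_path    = next((sample_dir / "gt").iterdir())

    rgb = np.asarray(Image.open(rgb_path).convert("RGB"), dtype=np.float32) / 255.0
    # The depth map is already bicubically upsampled to the RGB resolution.
    depth = np.asarray(Image.open(depth_path), dtype=np.float32)
    gt = np.asarray(Image.open(gt_path).convert("RGB"), dtype=np.float32) / 255.0
    return rgb, depth, gt
```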
The test set consists of 15 images similar to the training/validation images, with corresponding depth maps. Participants may also use the ARKitScenes dataset for pretraining, although motion blur then needs to be simulated (one possible approach is sketched below).
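As a hedged illustration of one way to simulate motion blur on sharp ARKitScenes frames, the sketch below convolves an image with a random linear motion kernel. The challenge does not prescribe a specific blur model; the kernel length, angle distribution, and boundary handling here are arbitrary choices for illustration only.

```python
# Sketch: synthesize motion blur by convolving a sharp frame with a
# random linear motion kernel (assumed blur model, not mandated by the challenge).
import numpy as np
from scipy.ndimage import convolve

def random_linear_motion_kernel(length=15, angle_deg=None, rng=None):
    """Build a normalized linear motion-blur kernel of the given size."""
    rng = rng or np.random.default_rng()
    if angle_deg is None:
        angle_deg = rng.uniform(0, 180)
    kernel = np.zeros((length, length), dtype=np.float32)
    c = length // 2
    theta = np.deg2rad(angle_deg)
    # Rasterize a line through the kernel center at the chosen angle.
    for t in np.linspace(-c, c, 4 * length):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c + t * np.sin(theta)))
        if 0 <= x < length and 0 <= y < length:
            kernel[y, x] = 1.0
    return kernel / kernel.sum()

def simulate_blur(sharp_rgb):
    """Blur a float32 HxWx3 image, channel by channel, with one random kernel."""
    k = random_linear_motion_kernel()
    return np.stack([convolve(sharp_rgb[..., ch], k, mode="reflect")
                     for ch in range(3)], axis=-1)
```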
Codalab link: https://codalab.lisn.upsaclay.fr/competitions/22066?secret_key=51be7576-2425-4d9e-bbab-f1d2e07e0320
Join the challenge by registering on the Codalab platform (https://codalab.lisn.upsaclay.fr) and signing up for the “Lidar-guided Image Deblurring Challenge” at the link above.
The test data provide the blurred images; you must return the corresponding deblurred images by exploiting both the image itself and the Lidar depth map. Submissions consist of a zip file with the deblurred images, using the same file names as the originals.
The final score is the LPIPS perceptual metric computed between the deblurred images and the sharp ground truth images. The leaderboard displays the current ranking of submissions by the different teams.
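Participants who want to track this metric locally during development can use the open-source lpips Python package (pip install lpips). The backbone used by the official evaluation is not specified here, so the 'alex' network below is an assumption.

```python
# Sketch: compute LPIPS locally with the `lpips` package.
import lpips
import torch

loss_fn = lpips.LPIPS(net='alex')  # assumption: the official backbone may differ

def lpips_score(pred_rgb, gt_rgb):
    """pred_rgb, gt_rgb: float32 HxWx3 numpy arrays in [0, 1]."""
    def to_tensor(img):
        # lpips expects NCHW tensors scaled to [-1, 1]
        t = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)
        return t * 2.0 - 1.0
    with torch.no_grad():
        return loss_fn(to_tensor(pred_rgb), to_tensor(gt_rgb)).item()
```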
The submission MUST be a zip file called predictions.zip containing as many .png files as there are test images. Deblurred images must be saved as 8-bit PNGs.
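A minimal packaging sketch is given below, assuming the PNG files sit at the top level of the archive (whether subfolders are tolerated is not stated, so a flat archive is the safer assumption).

```python
# Sketch: write each deblurred image as an 8-bit PNG under its original name,
# then pack everything into predictions.zip.
import zipfile
from pathlib import Path
import numpy as np
from PIL import Image

def write_submission(results, out_dir="submission"):
    """results: dict mapping original file name -> float32 HxWx3 image in [0, 1]."""
    out_dir = Path(out_dir)
    out_dir.mkdir(exist_ok=True)
    for name, img in results.items():
        img8 = (np.clip(img, 0.0, 1.0) * 255.0).round().astype(np.uint8)
        Image.fromarray(img8).save(out_dir / Path(name).with_suffix(".png").name)
    with zipfile.ZipFile("predictions.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        for png in sorted(out_dir.glob("*.png")):
            zf.write(png, arcname=png.name)  # flat archive, no subfolders
```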
All submissions must be uploaded to Codalab before July 31.
The top 3 teams will be invited to present their solutions at the workshop. A video presentation explaining the submitted method must be sent to the organizers before August 31. The presentation must be at most 15 minutes long and can be, for example, a recording of PowerPoint slides with a voice-over explanation.
Teams are responsible for their own registration to the EUSIPCO conference and for travel arrangements to attend the workshop. If a team cannot attend the workshop in person, the organizers will play their video presentation.
March 15, 2025: challenge opens
July 31, 2025: Codalab submission deadline
August 31, 2025: solution presentation submission deadline
September 12, 2025: Depth-guided Image Processing Workshop