ICIP 2023 - Point Cloud Visual Quality Assessment Grand Challenge

Description 

Point clouds (PCs) are an essential data format for the transmission and storage of immersive visual content and 3D visual data. They are employed in various applications, including virtual and mixed reality, autonomous driving, construction, and cultural heritage. Point clouds represent 3D data as a set of points with (x, y, z) coordinates, also referred to as the point cloud geometry, and associated attributes, such as colors, normals, and reflectance. Depending on whether the PC includes a temporal dimension, we can further distinguish between static and dynamic point clouds.
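As a concrete illustration, a point cloud with geometry and color attributes can be represented as a pair of arrays. This is a minimal sketch, not tied to any particular point cloud library; the variable names are our own.

```python
import numpy as np

# Minimal sketch of a static point cloud: N points, each with an
# (x, y, z) coordinate (the geometry) and an RGB color attribute.
rng = np.random.default_rng(0)
num_points = 1000

geometry = rng.uniform(0.0, 1.0, size=(num_points, 3))               # x, y, z
colors = rng.integers(0, 256, size=(num_points, 3), dtype=np.uint8)  # r, g, b

# A dynamic point cloud adds a temporal dimension: one such frame per timestamp.
print(geometry.shape, colors.shape)  # → (1000, 3) (1000, 3)
```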

The number of points in a PC can easily reach the order of millions, and each point can carry complex attributes. To deal with the significant transmission bandwidth and storage space required by point clouds, a large amount of work has been devoted to efficient point cloud compression (PCC) tools in the past few years. In particular, the Moving Picture Experts Group (MPEG) has recently finalized two PCC standards: geometry-based PCC (G-PCC), which operates directly in the 3D domain, and video-based PCC (V-PCC), which is based on 2D projections and supports dynamic content (video). Furthermore, inspired by the success of learning-based image and video compression, several recent PCC approaches use variational auto-encoders and other deep neural network architectures to efficiently code the geometry and/or attributes of point clouds. Depending on the coding approach, decoded point clouds can exhibit different visual artifacts.

All the lossy compression approaches mentioned above may introduce significant visual distortion, which calls for effective methods to quantify the quality of experience of compressed point clouds. More generally, point cloud quality metrics are essential to optimize and benchmark processing algorithms such as coding, denoising, and super-resolution. Several PC quality metrics have been proposed in recent years; they either work directly in the 3D domain (e.g., point-to-point or point-to-plane distortion metrics) or use 2D projections together with conventional 2D quality metrics. In either case, the ability of these metrics to predict subjective opinion scores has not yet been thoroughly assessed. One factor that has limited progress in point cloud quality assessment is the lack of large and diverse datasets of distorted point clouds with subjective annotations, which would make it possible to rigorously challenge existing and novel PC quality metrics. In this grand challenge, we provide a large dataset derived from 75 pristine point clouds spanning several semantic classes and acquired with different devices. Participants will have the chance to test their algorithms on this new dataset and benchmark their approaches against competitors and state-of-the-art methods.
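To make the 3D-domain metrics mentioned above concrete, the following is a simplified, one-way sketch of point-to-point and point-to-plane distortion. The standardized D1/D2 metrics additionally include a symmetric pass (reference-to-degraded and degraded-to-reference) and a PSNR normalization, both omitted here; the function names are ours.

```python
import numpy as np

def point_to_point_mse(ref, deg):
    """One-way point-to-point (D1-style) distortion: for each degraded
    point, the squared Euclidean distance to its nearest reference point,
    averaged over the degraded cloud."""
    # Brute-force nearest neighbours; fine for small clouds,
    # a KD-tree would be used in practice.
    d2 = ((deg[:, None, :] - ref[None, :, :]) ** 2).sum(axis=-1)  # (M, N)
    return d2.min(axis=1).mean()

def point_to_plane_mse(ref, deg, ref_normals):
    """One-way point-to-plane (D2-style) distortion: project each error
    vector onto the normal of the nearest reference point before squaring,
    so that errors tangent to the underlying surface do not count."""
    d2 = ((deg[:, None, :] - ref[None, :, :]) ** 2).sum(axis=-1)
    nn = d2.argmin(axis=1)                     # index of nearest reference point
    err = deg - ref[nn]                        # per-point error vectors
    proj = (err * ref_normals[nn]).sum(axis=-1)  # signed distance along normal
    return (proj ** 2).mean()
```

For example, shifting a cloud tangentially to a flat surface yields a nonzero point-to-point error but a zero point-to-plane error, which is one reason the point-to-plane variant is often considered closer to perceived geometric quality.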

--

Contributors to the challenge are invited to submit a challenge paper to ICIP 2023 (deadline: 26 April 2023). Authors of the best submissions will also have the chance to submit an extended version of their paper to a Special Issue that we will organize in Elsevier Signal Processing: Image Communication (details will be provided soon).

Questions?

Do you have any questions? Feel free to contact us via the email below!

Please check the "Tracks and Evaluation Criteria" page to find the links to the CodaLab pages.