In this paper, we propose a hierarchical framework for image denoising and term it Hierarchical Noise-Deinterlace Net (HNN). Image denoising techniques aim to recover clean images from noisy observations by reducing unwanted noise and artifacts, enhancing clarity and spatial coherence. Images captured under challenging scenarios suffer from granular noise, which induces fine-scale variations due to the limitations of imaging technology or environmental conditions. This granular noise can significantly degrade image quality, making the images less useful for applications such as object detection, image restoration/enhancement, face detection, and image super-resolution. From the literature, we infer that learning global-local features contributes significantly to reducing unwanted noise and artifacts within images. Typically, researchers rely on residual learning, Generative Adversarial Networks (GANs), and attention mechanisms to learn global-local features. However, these methods face challenges such as vanishing gradients, limited generalization of GAN generators, and the lack of global context awareness and high computational complexity of attention mechanisms, leading to a drop in performance. Towards this, we propose a hierarchical framework that processes both global and local information across distinct levels of the hierarchy. More specifically, we propose a hierarchical encoder-decoder network with a distinct Global-Local Spatio-Contextual (GLSC) block for learning fine-grained features and high-frequency details in an image. The proposed framework improves image denoising, as it allows the model to capture and utilize information from different scales, ensuring a comprehensive understanding of the image content. We demonstrate the efficacy of the proposed HNN framework on benchmark datasets in comparison with state-of-the-art methods, with a 5% (↑ in dB) increase in performance.
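The core idea — denoising by fusing information across distinct levels of a hierarchy, with global context from coarse scales and local detail from fine scales — can be illustrated with a minimal NumPy sketch. This is a hypothetical toy analogue, not the HNN implementation: the learned GLSC block is stood in for by a simple box filter, and the pyramid fusion weights (`0.5`/`0.5`) are arbitrary choices for illustration.

```python
import numpy as np

def downsample(img):
    # Average 2x2 blocks (assumes even height/width)
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # Nearest-neighbour upsampling by a factor of 2
    return img.repeat(2, axis=0).repeat(2, axis=1)

def box_filter(img, k=3):
    # Simple local smoothing; a stand-in for the learned
    # denoising block at each level of the hierarchy
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hierarchical_denoise(noisy, levels=3):
    # Build an image pyramid: coarse levels carry global context,
    # fine levels carry local spatial detail
    pyramid = [noisy]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    # Denoise the coarsest level, then refine upward, fusing the
    # upsampled coarse estimate with a locally denoised fine level
    out = box_filter(pyramid[-1])
    for level in reversed(pyramid[:-1]):
        out = 0.5 * upsample(out) + 0.5 * box_filter(level)
    return out
```

In HNN the per-level processing is learned end-to-end rather than fixed filtering, but the multi-scale decompose-process-fuse pattern above is the structural principle the abstract describes.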
Figure 3. Qualitative comparisons of the HNN framework with SOTA methods on the DND [23] dataset. The leftmost image is the input noisy image. The zoomed-in view of the highlighted area is shown in the 1st row (Noisy input). We observe that the proposed HNN consistently preserves fine spatial and contextual information in the denoised images.
Figure 6. Qualitative comparisons of HNN with SOTA methods on the McMaster [42] dataset. The above image is the input noisy image. The following images compare SOTA methods with HNN. We observe that the proposed HNN preserves fine spatial-contextual information in the denoised images.
Figure 5. Qualitative comparisons of HNN with SOTA methods on the Kodak24K [27] dataset. The above image is the input noisy image. The following images compare SOTA methods with HNN. We observe that the proposed HNN preserves fine spatial-contextual information in the denoised images.
Table 1. Performance comparisons of the HNN framework with SOTA methods on the CBSD68 [21], Kodak24K [27], and McMaster [42] datasets with varying levels of σ. Highlighted cells represent the highest and second-highest values, respectively.
Contact nikhil.akalwadi@kletech.ac.in to get more information on the project
@InProceedings{Joshi_2024_CVPR,
author = {Joshi, Amogh and Akalwadi, Nikhil and Mandi, Chinmayee and Desai, Chaitra and Tabib, Ramesh Ashok and Patil, Ujwala and Mudenagudi, Uma},
title = {HNN: Hierarchical Noise-Deinterlace Net Towards Image Denoising},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2024},
pages = {3007-3016}
}