LightNet: Generative Model for Enhancement of Low-Light Images

Abstract

In this work, we propose a generative model for the enhancement of images captured in low-light conditions. Sensor constraints and inappropriate lighting are responsible for the degradations introduced in such images. These degradations limit the visibility of the scene and impede vision applications such as detection, tracking, and surveillance. Recently, deep learning algorithms have made significant strides in enhancing images captured in low-light conditions. However, these algorithms fail to capture information on fine-grained local structures, which limits their performance. Towards this, we propose a generative model for the enhancement of low-light images that exploits both local and global information, and term it LightNet. The proposed LightNet architecture includes a hierarchical generator, comprising an encoder-decoder module to capture global information, and a patch discriminator to capture fine-grained local information. The encoder-decoder module downsamples the low-light image into distinct scales. Learning at distinct scales helps to capture both local and global features, thereby suppressing unwanted artifacts (noise, blur). With this motivation, we downsample the captured low-light image into three distinct scales. The decoder upsamples the encoded features at the respective scales to generate the enhanced image. We demonstrate the results of the proposed methodology on custom and benchmark datasets in comparison with state-of-the-art (SOTA) methods using appropriate quantitative metrics.
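To make the architecture described above concrete, the following is a minimal PyTorch sketch of a three-scale encoder-decoder generator paired with a PatchGAN-style discriminator. Layer widths, kernel sizes, and module names are illustrative assumptions, not the exact LightNet configuration.

```python
# Sketch of a 3-scale encoder-decoder generator plus a patch discriminator.
# All hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU activations."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class LightNetGenerator(nn.Module):
    """Encoder downsamples the input to 3 scales; decoder upsamples back."""

    def __init__(self, base=32):
        super().__init__()
        self.enc1 = conv_block(3, base)             # full resolution
        self.enc2 = conv_block(base, base * 2)      # 1/2 resolution
        self.enc3 = conv_block(base * 2, base * 4)  # 1/4 resolution
        self.dec2 = conv_block(base * 4 + base * 2, base * 2)
        self.dec1 = conv_block(base * 2 + base, base)
        self.out = nn.Conv2d(base, 3, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool2d(e1, 2))
        e3 = self.enc3(F.max_pool2d(e2, 2))
        d2 = self.dec2(torch.cat([F.interpolate(e3, scale_factor=2), e2], 1))
        d1 = self.dec1(torch.cat([F.interpolate(d2, scale_factor=2), e1], 1))
        return torch.sigmoid(self.out(d1))  # enhanced image in [0, 1]


class PatchDiscriminator(nn.Module):
    """PatchGAN: outputs a grid of real/fake scores, one per local patch."""

    def __init__(self, base=64):
        super().__init__()
        layers, ch = [], 3
        for out_ch in (base, base * 2, base * 4):
            layers += [nn.Conv2d(ch, out_ch, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            ch = out_ch
        layers += [nn.Conv2d(ch, 1, 4, padding=1)]  # per-patch logits
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```

Each output logit of the patch discriminator corresponds to one local receptive field, which is what pushes the generator to respect fine-grained local structure while the multi-scale encoder-decoder handles the global illumination.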


Proposed Dataset 

Figure: Images captured in low-light conditions. The 1st column shows the ground-truth image (captured with auto settings), the 2nd column shows images captured with ISO50, the 3rd column with ISO100, the 4th column with ISO200, the 5th column with ISO400, the 6th column with ISO800, and the 7th column with ISO3200. Exposure is fixed within each row (1st row: 1/24000s, 2nd row: 1/4000s, 3rd row: 1/2000s, 4th row: 1/180s, 5th row: 1/20s, 6th row: 1/2s).
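For illustration, here is a minimal sketch of how such ground-truth/low-light pairs could be loaded for training. It assumes a hypothetical layout in which each scene folder holds the auto-settings reference as gt.png plus one file per ISO/exposure combination (e.g. iso50_exp1-24000.png); the naming scheme is an assumption for illustration, not the dataset's actual structure.

```python
# Hypothetical loader pairing each ISO/exposure capture with its scene's
# auto-settings ground truth. Directory layout and filenames are assumptions.
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset
from torchvision.transforms.functional import to_tensor


class LowLightPairs(Dataset):
    def __init__(self, root):
        self.pairs = []
        for scene in sorted(Path(root).iterdir()):
            gt = scene / "gt.png"                       # auto-settings reference
            for shot in sorted(scene.glob("iso*_exp*.png")):
                self.pairs.append((shot, gt))           # (low-light, ground truth)

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        low_path, gt_path = self.pairs[idx]
        low = to_tensor(Image.open(low_path).convert("RGB"))
        gt = to_tensor(Image.open(gt_path).convert("RGB"))
        return low, gt
```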


Results of our proposed LightNet on Enhancement of Images Captured in Low-Light Conditions

Figure: Qualitative results. Each pair shows an input image (left) and the corresponding enhanced image produced by LightNet (right).

Table 1: Results of the proposed methodology in comparison with state-of-the-art methods using PSNR and SSIM (averaged across the selected data of the SID dataset [3]) as reference-based quantitative metrics. The last row corresponds to the results of the proposed LightNet (represented in bold).
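For reproducibility, the following is a minimal sketch of how these reference-based metrics can be computed with scikit-image, assuming the enhanced output and its ground truth are uint8 RGB arrays; averaging over the evaluation set gives the reported numbers.

```python
# Compute PSNR (in dB) and SSIM for one enhanced/ground-truth image pair.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_pair(enhanced: np.ndarray, ground_truth: np.ndarray):
    """Return (PSNR in dB, SSIM) for a single uint8 RGB image pair."""
    psnr = peak_signal_noise_ratio(ground_truth, enhanced, data_range=255)
    ssim = structural_similarity(ground_truth, enhanced,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim
```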


Table 2: Results of the proposed methodology on the custom dataset in comparison with state-of-the-art methods using the PSNR (in dB) metric. To demonstrate the robustness of the proposed methodology, we consider camera settings with ISO50 and ISO100 under exposures of 1/2s, 1/180s, and 1/24000s (extreme low-light condition). Highlighted cells mark the highest and second-best values.

Note: The Enlighten Anything model [33] fails to segment the scene effectively under extreme low-light conditions (exposure 1/24000s + ISO50), leading to failure in low-light image enhancement.

Questions?

Contact nikhil.akalwadi@kletech.ac.in for more information on the project.

BibTeX

@InProceedings{Desai_2023_ICCV,
    author    = {Desai, Chaitra and Akalwadi, Nikhil and Joshi, Amogh and Malagi, Sampada and Mandi, Chinmayee and Tabib, Ramesh Ashok and Patil, Ujwala and Mudenagudi, Uma},
    title     = {LightNet: Generative Model for Enhancement of Low-Light Images},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2023},
    pages     = {2231-2240}
}