Dual camera for low-light photography

Computational photography using a color-plus-mono dual camera

        - Color + mono image fusion for low-light enhancement

        - Method 1: Detail transfer

        - Method 2: Color transfer

[ Example of image enhancement ]

The original input image pair is captured under a low-light (6 lux) condition by our color-plus-mono dual camera.

Method 1: Selective Detail Transfer

Paper title: Enhancement of low light level images using color-plus-mono dual camera, Optics Express 

Paper link: https://www.osapublishing.org/oe/abstract.cfm?uri=oe-25-10-12029 

Abstract: In digital photography, improving imaging quality in low-light shooting is one of users' key needs. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera that consists of two horizontally separated image sensors, which simultaneously capture a color and mono image pair of the same scene, could be useful for improving the quality of low light level images. However, incorrect image fusion between the color and mono image pair can also have negative effects, such as the introduction of severe visual artifacts into the fused images. We propose a selective image fusion technique that applies adaptive guided-filter-based denoising and selective detail transfer only to those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental color-plus-mono camera system, we demonstrate that BJND-aware denoising and selective detail transfer help improve image quality in low-light shooting.
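For intuition, here is a minimal Python sketch of guided-filter-based denoising plus detail transfer gated by a simple reliability mask. It is only an illustration of the general technique, not the paper's implementation: it assumes the color and mono images are already rectified and disparity-compensated, and a fixed threshold `tau` stands in for the per-pixel dissimilarity/BJND analysis. All function names and parameter values are ours.

```python
# Sketch: selective detail transfer for an aligned color/mono pair.
# Assumptions (not from the paper): images are pre-rectified and
# disparity-compensated; a constant threshold tau replaces the
# per-pixel BJND-based reliability analysis.
import numpy as np
import cv2

def guided_filter(I, p, r=8, eps=1e-3):
    """Classic guided filter (He et al.); I is the guidance, p the input."""
    box = lambda x: cv2.boxFilter(x, ddepth=-1, ksize=(2 * r + 1, 2 * r + 1))
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p
    var_I = box(I * I) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box(a) * I + box(b)

def selective_detail_transfer(color_bgr, mono, tau=0.08):
    color = color_bgr.astype(np.float32) / 255.0
    mono_f = mono.astype(np.float32) / 255.0
    ycrcb = cv2.cvtColor(color, cv2.COLOR_BGR2YCrCb)
    y = ycrcb[..., 0]
    # Denoise the noisy color luminance with the cleaner mono image as guidance.
    y_denoised = guided_filter(mono_f, y)
    # High-frequency detail layer of the mono image (residual over its base).
    detail = mono_f - guided_filter(mono_f, mono_f)
    # Transfer detail only where color and mono agree; feather the mask so
    # the transition between fused and non-fused pixels stays seamless.
    reliable = (np.abs(y_denoised - mono_f) < tau).astype(np.float32)
    reliable = cv2.GaussianBlur(reliable, (11, 11), 0)
    ycrcb[..., 0] = np.clip(y_denoised + reliable * detail, 0.0, 1.0)
    out = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)
```

The feathered mask is the "selective" part: pixels flagged as unreliable keep only the denoised luminance, so stereo mismatches cannot inject mono detail as artifacts.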

The figures below show examples from the test datasets captured under a 6 lux condition, together with their processing results. The proposed approach in Figs. 13(d), 14(d), and 15(d) improves image quality in terms of denoising and image sharpness. In particular, text characters in the images are much more legible when the images are processed with the proposed method (selective detail transfer).

Fig. 13. A close-up of a color and mono image pair and its processing results. (a) Original color and mono image pair captured by a dual camera in 6 lux condition. (b) Histogram matching. (c) Individual guided filter. (d) Proposed approach (selective detail transfer).

Fig. 14. A close-up of a color and mono image pair and its processing results. (a) Original color and mono image pair captured in 6 lux condition. (b) Histogram matching. (c) Individual guided filter. (d) Proposed approach (selective detail transfer).


Method 2: Color Transfer Based on a Deep Learning Model

Paper title: Deep color transfer for color-plus-mono dual cameras, Sensors 2020 

Paper link: https://www.mdpi.com/1424-8220/20/9/2743/htm 

Abstract: A few approaches have studied image fusion using color-plus-mono dual cameras to improve image quality in low-light shooting. Among them, the color transfer approach, which transfers the color information of a color image to a mono image, is considered promising for obtaining improved images with less noise and more detail. However, color transfer algorithms rely heavily on appropriate color hints from a given color image. Unreliable color hints caused by errors in the stereo matching of a color-plus-mono image pair can generate various visual artifacts in the final fused image. This study proposes a novel color transfer method that seeks reliable color hints from a color image and colorizes the corresponding mono image with those reliable hints using a deep learning model. Specifically, a color-hint-based mask generation algorithm is developed to obtain reliable color hints. It removes unreliable color pixels using a reliability map computed by the binocular just-noticeable-difference model. In addition, a deep colorization network that utilizes structural information is proposed to solve the color-bleeding artifact problem. The experimental results demonstrate that the proposed method provides better results than existing image fusion algorithms for dual cameras.
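As a rough illustration of the color-hint idea, the following Python sketch warps chroma from the color view into the mono view with off-the-shelf OpenCV stereo matching and keeps only hints that pass a simple photometric consistency check. This is a stand-in sketch, not the paper's pipeline: the BJND-based reliability map and the deep colorization network of Figure 11 are not reproduced, the mono camera is assumed to be the left view, and the matcher settings and threshold are illustrative.

```python
# Sketch: reliable color-hint generation for a rectified color/mono pair.
# Assumptions (not from the paper): mono is the left view; SGBM stereo
# matching plus a photometric consistency check replace the paper's
# matching and BJND-based reliability map.
import numpy as np
import cv2

def generate_color_hints(color_bgr, mono, num_disp=64, tau=0.05):
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=num_disp,
                                 blockSize=7, P1=8 * 49, P2=32 * 49)
    # SGBM returns fixed-point disparity scaled by 16; invalid pixels are < 0.
    disp = sgbm.compute(mono, gray).astype(np.float32) / 16.0
    h, w = mono.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # For a left-view mono pixel at x, the matching color pixel sits at x - d.
    ycrcb = cv2.cvtColor(color_bgr.astype(np.float32) / 255.0,
                         cv2.COLOR_BGR2YCrCb)
    warped = cv2.remap(ycrcb, xs - disp, ys, cv2.INTER_LINEAR)
    # Keep a hint only where the warped luminance agrees with the mono pixel.
    reliable = (disp > 0) & (np.abs(warped[..., 0] - mono / 255.0) < tau)
    hints = np.zeros((h, w, 2), np.float32)  # sparse CrCb hints, 0 = no hint
    hints[reliable] = warped[reliable][:, 1:]
    return hints, reliable.astype(np.uint8)
```

A colorization network along the lines of Figure 11 would then take the mono image plus these sparse, reliable hints and propagate color, rather than trusting every warped pixel.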

Figure 11. Model architecture of the proposed colorization network.

Figure 22. Visual comparison results of colorization performance. (a) Input image with a color hole. (b) Ground truth. (c) Levin colorization [12]. (d) Zhang model (l1 loss) [13]. (e) Zhang model (l1 + SSIM loss) [13]. (f) Proposed model.

[ CVIP color and mono image database ]

We constructed the test dataset for performance evaluation of the proposed approach using our dual camera, capturing sixteen pairs of color and mono images in low-light conditions at a spatial resolution of 1328 x 1048 pixels. The pairs cover various low light levels, from 10 lux down to 4 lux: seven scenes were captured under 10 lux, six under 6 lux, and three under 4 lux.
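For reference, a minimal Python loader for iterating over the sixteen pairs might look like the sketch below. The directory layout and file-naming scheme are purely illustrative assumptions, not the database's actual structure.

```python
# Sketch: enumerate the color/mono pairs of the database.
# The root folder and file names below are hypothetical; adapt them
# to the layout of the downloaded archive.
import cv2
from pathlib import Path

SCENES = {10: 7, 6: 6, 4: 3}  # lux level -> number of scenes (16 pairs total)

def load_pairs(root="cvip_color_mono_db"):
    pairs = []
    for lux, count in SCENES.items():
        for i in range(1, count + 1):
            stem = Path(root) / f"{lux}lux_scene{i:02d}"
            color = cv2.imread(f"{stem}_color.png")
            mono = cv2.imread(f"{stem}_mono.png", cv2.IMREAD_GRAYSCALE)
            if color is None or mono is None:
                raise FileNotFoundError(f"missing pair: {stem}")
            # Each image is 1328 x 1048 pixels (width x height).
            assert color.shape[:2] == mono.shape == (1048, 1328)
            pairs.append((lux, color, mono))
    return pairs
```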

Click here to download the database.