Deep Learning-based Lensless Image Reconstruction

Lensless cameras can be built with a very small form factor at considerably low cost by replacing the lens module with computation. However, image reconstruction in lensless cameras relies on iterative optimization algorithms, which can be slow and resource-demanding. In this work, we propose a deep neural network architecture for image reconstruction in a lensless camera. Our network achieves high reconstruction quality with fast processing times.


Lensless Imaging with an End-to-End Deep Neural Network

We propose an encoder-decoder-shaped network for restoring the original scene from a measurement made with a phase-mask-based lensless camera. We demonstrate improved image quality and faster reconstruction compared to image reconstruction methods based on iterative optimization algorithms.
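An encoder-decoder network of this kind can be sketched as below. This is a minimal illustration, not the paper's exact architecture: the channel counts, kernel sizes, and depth are assumptions chosen for brevity. The encoder compresses the raw sensor measurement through strided convolutions; the decoder upsamples back to a clean RGB image.

```python
import torch
import torch.nn as nn

class LenslessNet(nn.Module):
    """Minimal encoder-decoder sketch for lensless reconstruction.
    Channel counts and depth are illustrative, not the paper's configuration."""

    def __init__(self, in_ch=3, base=16):
        super().__init__()
        # Encoder: strided convolutions halve spatial resolution at each stage.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: transposed convolutions restore the original resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(base, in_ch, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```

A single forward pass maps a measurement tensor to an image of the same spatial size, with pixel values in (0, 1) from the final sigmoid.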

Overall, our network performs well. Our reconstructed images show fewer artifacts, more accurate colors, and more uniform brightness across the image, whereas gradient descent (GD) and ADMM produce regular artifacts and non-uniform brightness. Our network is also 6,611 times faster than reconstruction with the GD algorithm and 7,316 times faster than with ADMM, fast enough for real-time imaging applications. Even with limited computing resources (an Nvidia Jetson Nano with a 128-core Maxwell GPU), reconstruction takes about 130 ms.
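The GD baseline compared against above can be sketched as follows, assuming a circular-convolution forward model with a known point spread function (PSF), which is a common simplification for mask-based lensless cameras. The step size, PSF handling, and iteration count here are illustrative assumptions.

```python
import numpy as np

def forward(x, psf_fft):
    """Forward model y = Ax: circular convolution with the PSF, via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * psf_fft))

def adjoint(y, psf_fft):
    """Adjoint A^T y: correlation with the PSF (conjugate in Fourier domain)."""
    return np.real(np.fft.ifft2(np.fft.fft2(y) * np.conj(psf_fft)))

def gd_reconstruct(y, psf, n_iters=100, step=1e-2):
    """Minimize ||Ax - y||^2 by gradient descent: x <- x - step * A^T(Ax - y)."""
    psf_fft = np.fft.fft2(np.fft.ifftshift(psf))  # center PSF at the origin
    x = np.zeros_like(y)
    for _ in range(n_iters):
        x -= step * adjoint(forward(x, psf_fft) - y, psf_fft)
    return x
```

Each iteration costs a few FFTs, and typical reconstructions need hundreds of iterations, which is the speed gap a single feed-forward network pass closes.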

Image Reconstruction in Lensless Cameras with Unrolled Optimization Algorithms

Image reconstruction models based on deep learning have been demonstrated, but they typically require a large dataset for training. We propose unrolled optimization algorithms with trainable parameters for the lensless image reconstruction problem, restoring higher-quality images in less computing time.

In unrolled ADMM, a fixed number of iterations of classic ADMM is unrolled, and each iteration is interpreted as a layer in a deep network. In this work, we use two unrolled ADMM structures. ADMM-RNN shares its parameters across all depths, so only 4 parameters need to be learned; thanks to this small parameter count, the network can be trained on a small dataset. ADMM-DNN has separate parameters at each depth, giving 4 × depth parameters to learn; as a result, it reaches higher reconstructed image quality at a smaller depth than ADMM-RNN.
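The parameter-sharing distinction can be sketched on a toy problem. The sketch below unrolls ADMM for an l1-denoising split (min over x of 0.5||x − y||² + λ||z||₁ subject to x = z) rather than the lensless forward model, and it learns 2 scalars per layer (ρ and λ) instead of the 4 described above; both substitutions are simplifying assumptions. The shared/per-depth switch is the point being illustrated.

```python
import torch
import torch.nn as nn

def soft_threshold(v, tau):
    """Proximal operator of the l1 norm."""
    return torch.sign(v) * torch.clamp(v.abs() - tau, min=0.0)

class UnrolledADMM(nn.Module):
    """Toy unrolled ADMM: each iteration becomes a layer whose hyperparameters
    are trainable. share_params=True mimics ADMM-RNN (one shared set of
    scalars); False mimics ADMM-DNN (a separate set per depth)."""

    def __init__(self, depth=8, share_params=True):
        super().__init__()
        n = 1 if share_params else depth
        self.depth, self.share = depth, share_params
        self.rho = nn.Parameter(torch.ones(n))        # penalty per layer
        self.lam = nn.Parameter(0.1 * torch.ones(n))  # threshold per layer

    def forward(self, y):
        x = torch.zeros_like(y)
        z = torch.zeros_like(y)
        u = torch.zeros_like(y)
        for k in range(self.depth):
            i = 0 if self.share else k
            rho, lam = self.rho[i], self.lam[i]
            x = (y + rho * (z - u)) / (1 + rho)   # quadratic x-update
            z = soft_threshold(x + u, lam / rho)  # proximal z-update
            u = u + x - z                         # dual ascent
        return z
```

Counting parameters makes the trade-off concrete: the shared variant learns the same 2 scalars regardless of depth, while the per-depth variant at depth 8 learns 2 × 8 = 16, which is why the per-depth form has more capacity per layer but needs more data.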

All unrolled ADMM structures restore higher-quality images in a shorter time than 100 iterations of classic ADMM. At the same depth, ADMM-DNN achieves higher image quality than ADMM-RNN. In unrolled ADMM networks, reconstruction time increases linearly with depth. The quality of the reconstructed image also increases with depth, but with diminishing returns. Considering reconstruction time and overall performance, ADMM-DNN with a depth of 8 offers the best reconstruction efficiency.
