Single Image Dehazing via Multi-Scale Convolutional Neural Networks
Wenqi Ren1,3, Si Liu2, Hua Zhang2, Jinshan Pan3, Xiaochun Cao1, and Ming-Hsuan Yang3
1Tianjin University, 2IIE, CAS, 3University of California, Merced
Abstract
The performance of existing image dehazing methods is limited by hand-designed features, such as the dark channel, color disparity and maximum contrast, with complex fusion schemes. In this paper, we propose a multi-scale deep neural network for single-image dehazing by learning the mapping between hazy images and their corresponding transmission maps. The proposed algorithm consists of a coarse-scale net which predicts a holistic transmission map based on the entire image, and a fine-scale net which refines results locally. To train the multi-scale deep network, we synthesize a dataset comprised of hazy images and corresponding transmission maps based on the NYU Depth dataset. Extensive experiments demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both synthetic and real-world images in terms of quality and speed.
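For readers who want to reproduce the data synthesis described above, the training pairs are built from the standard atmospheric scattering model, I(x) = J(x) t(x) + A (1 - t(x)) with transmission t(x) = exp(-beta * d(x)) computed from scene depth. The snippet below is a minimal Python/NumPy sketch of this synthesis step; the scattering coefficient beta and the atmospheric light value used here are illustrative, not the exact settings from the paper.

```python
import numpy as np

def synthesize_hazy(clean, depth, beta=1.0, airlight=0.8):
    """Synthesize a hazy image from a clean image and its depth map.

    clean:    H x W x 3 float array in [0, 1]
    depth:    H x W float array (scene depth, e.g. from NYU Depth)
    beta:     scattering coefficient (illustrative value)
    airlight: global atmospheric light A (illustrative value)
    """
    # Transmission map t(x) = exp(-beta * d(x))
    t = np.exp(-beta * depth)
    # Atmospheric scattering model: I = J * t + A * (1 - t)
    hazy = clean * t[..., None] + airlight * (1.0 - t[..., None])
    return hazy, t
```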
Figure 1. Dehazing results on a real hazy image: (a) input, (b) He et al. CVPR 2009, (c) Tang et al. CVPR 2014, (d) ours. The image recovered by our method in (d) has rich details and vivid colors.
Model
Figure 2. (a) Main steps of the proposed single-image dehazing algorithm. To train the multi-scale network, we synthesize hazy images and the corresponding transmission maps from a depth-image dataset. At test time, we estimate the transmission map of the input hazy image with the trained model, and then generate the dehazed image using the estimated atmospheric light and the computed transmission map. (b) The proposed multi-scale convolutional neural network. Given a hazy image, the coarse-scale network (green dashed rectangle) predicts a holistic transmission map and feeds it to the fine-scale network (orange dashed rectangle) to generate a refined transmission map.
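At test time, once the network has predicted the transmission map, the scene radiance is recovered by inverting the scattering model, J(x) = (I(x) - A) / max(t(x), t0) + A. The sketch below illustrates this recovery step; the atmospheric-light heuristic (taking the brightest pixels in the most haze-opaque region) and the lower bound t0 are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def recover_scene(hazy, transmission, t0=0.1):
    """Recover the dehazed image from a predicted transmission map.

    hazy:          H x W x 3 float array in [0, 1]
    transmission:  H x W float array predicted by the CNN
    t0:            lower bound on transmission to avoid division by zero
    """
    # Estimate atmospheric light A from the pixels with the lowest
    # transmission (the most haze-opaque region); this rule is an assumption.
    flat_t = transmission.ravel()
    idx = np.argsort(flat_t)[: max(1, flat_t.size // 1000)]  # haziest 0.1%
    A = hazy.reshape(-1, 3)[idx].max(axis=0)
    # Invert the scattering model: J = (I - A) / max(t, t0) + A
    t = np.clip(transmission, t0, 1.0)[..., None]
    dehazed = (hazy - A) / t + A
    return np.clip(dehazed, 0.0, 1.0)
```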
Visual Comparisons
(a) Input (b) Tarel and Hautière ICCV 2009 (c) He et al. CVPR 2009 (d) Nishino et al. IJCV 2012 (e) Meng et al. ICCV 2013 (f) Zhu et al. TIP 2015 (g) Tang et al. CVPR 2014 (h) Ours
(a) Input (b) He et al. CVPR 2009 (c) Meng et al. ICCV 2013 (d) Ours
(a) Input (b) He et al. CVPR 2009 (c) Tang et al. CVPR 2014 (d) Ours
(a) Input (b) Tarel and Hautière ICCV 2009 (c) Meng et al. ICCV 2013 (d) Tang et al. CVPR 2014 (e) Ours
(a) Input (b) He et al. CVPR 2009 (c) Meng et al. ICCV 2013 (d) Tang et al. CVPR 2014 (e) Ours
(a) Input (b) He et al. CVPR 2009 (c) Meng et al. ICCV 2013 (d) Tang et al. CVPR 2014 (e) Ours
Paper
[Paper]
More results
[Supp]
Code
MATLAB CODE
If you have any problems, please feel free to contact me via email (rwq.renwenqi@gmail.com).
Thanks to Dishank Bansal for the TensorFlow implementation: Github-Tensorflow
Dataset
Based on the hazy-image synthesis method in this paper, we have synthesized a large dataset for fair comparison. The details can be found in RESIDE.
Citation
@inproceedings{Ren-ECCV-2016,
author = {Ren, Wenqi and Liu, Si and Zhang, Hua and Pan, Jinshan and Cao, Xiaochun and Yang, Ming-Hsuan},
title = {Single Image Dehazing via Multi-Scale Convolutional Neural Networks},
booktitle = {European Conference on Computer Vision},
year = {2016}
}
@article{li2017reside,
title = {RESIDE: A Benchmark for Single Image Dehazing},
author = {Li, Boyi and Ren, Wenqi and Fu, Dengpan and Tao, Dacheng and Feng, Dan and Zeng, Wenjun and Wang, Zhangyang},
journal = {arXiv preprint arXiv:1712.04143},
year = {2017}
}