Learning Dual Convolutional Neural Networks for Low-Level Vision

Jinshan Pan, Sifei Liu, Deqing Sun, Jiawei Zhang, Yang Liu, Jimmy Ren, Zechao Li, Jinhui Tang, Huchuan Lu, Yu-Wing Tai, Ming-Hsuan Yang

Figure 1. Visual comparisons of super-resolution results by the VDSR method [3] (×4) with base structures recovered by different methods, i.e., nearest neighbor, bilinear, and bicubic upsampling. Residual learning algorithms usually take the upsampled image as the base structure and learn only the details, i.e., the difference between the upsampled and ground-truth images. However, residual learning cannot correct low-frequency errors in the structures, e.g., the structure obtained by nearest neighbor interpolation in (c). In contrast, our algorithm is motivated by the decomposition of a signal into structures and details; it learns both components and thus leads to better results.
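The decomposition behind this observation can be illustrated on a toy 1-D signal (a sketch with made-up values, not data from the paper): the detail component is defined as the difference between the ground truth and the base structure, so recovering the details perfectly still leaves any error in the structure uncorrected unless the structure itself is also learned.

```python
import numpy as np

# Toy 1-D "image": decompose a signal into structure + detail.
ground_truth = np.sin(np.linspace(0, 2 * np.pi, 16))

# A hypothetical low-frequency base from coarse upsampling
# (here simulated by quantizing to steps of 0.5).
structure = np.round(ground_truth * 2) / 2

# The detail is, by definition, the residual w.r.t. the base.
detail = ground_truth - structure

# Structure + detail reconstructs the signal exactly, but a pure
# residual learner is handed `structure` as-is: its low-frequency
# errors persist unless the structure is also estimated, which is
# what the DualCNN's second branch is for.
reconstruction = structure + detail
```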


In this paper, we propose a general dual convolutional neural network (DualCNN) for low-level vision problems, e.g., super-resolution, edge-preserving filtering, deraining, and dehazing. These problems usually involve estimating two components of the target signals: structures and details. Motivated by this, the proposed DualCNN consists of two parallel branches that recover the structures and the details, respectively, in an end-to-end manner. The recovered structures and details then generate the target signals according to the formation model of each particular application. The DualCNN is a flexible framework for low-level vision tasks and can be easily incorporated into existing CNNs. Experimental results show that the DualCNN can be effectively applied to numerous low-level vision tasks and performs favorably against state-of-the-art methods.

Proposed Framework

Figure 2. Proposed DualCNN model. It contains two branches, Net-D and Net-S, and a problem formulation module. A DualCNN first estimates the structures and the details and then reconstructs the final result according to the formulation module. The whole network is end-to-end trainable.
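The architecture described above can be sketched in PyTorch as follows. This is a minimal illustration, not the paper's exact configuration: the layer counts and channel widths are assumptions, Net-S is a shallow SRCNN-style branch for the structures, Net-D is a deeper branch for the details, and the formation module is taken to be a simple sum X = S + D, as would be appropriate for super-resolution.

```python
import torch
import torch.nn as nn

class DualCNN(nn.Module):
    """Sketch of a DualCNN: a shallow Net-S recovers structures,
    a deeper Net-D recovers details, and a formation module
    combines them (here X = S + D). Layer sizes are illustrative
    assumptions, not the published configuration."""

    def __init__(self, channels=1, width=32):
        super().__init__()
        # Net-S: shallow branch for the low-frequency structures.
        self.net_s = nn.Sequential(
            nn.Conv2d(channels, width, 9, padding=4), nn.ReLU(),
            nn.Conv2d(width, width, 1), nn.ReLU(),
            nn.Conv2d(width, channels, 5, padding=2),
        )
        # Net-D: deeper branch for the high-frequency details.
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU()]
        for _ in range(4):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.net_d = nn.Sequential(*layers)

    def forward(self, x):
        s = self.net_s(x)   # estimated structures
        d = self.net_d(x)   # estimated details
        return s + d, s, d  # formation module: X = S + D

model = DualCNN()
y, s, d = model(torch.randn(1, 1, 32, 32))
```

Because both branches and the formation step are differentiable, a loss on the combined output X (optionally with auxiliary losses on S and D) trains the whole network end-to-end, as the caption states.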

Technical Paper and Codes

Jinshan Pan, Sifei Liu, Jiawei Zhang, Yang Liu, Jimmy Ren, Zechao Li, Jinhui Tang, Huchuan Lu, Yu-Wing Tai, and Ming-Hsuan Yang, "Learning Dual Convolutional Neural Networks for Low-Level Vision", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.


Trained models and code


[1] T.-Y. Lin, A. RoyChowdhury, and S. Maji, "Bilinear CNN models for fine-grained visual recognition", ICCV, 2015.

[2] C. Dong, C. C. Loy, K. He, and X. Tang, "Learning a deep convolutional network for image super-resolution", ECCV, 2014.

[3] J. Kim, J. K. Lee, and K. M. Lee, "Accurate image super-resolution using very deep convolutional networks", CVPR, 2016.