We present an effective and efficient face deblurring algorithm by exploiting semantic cues via deep convolutional neural networks (CNNs). As face images are highly structured and share several key semantic components (e.g., eyes and mouths), the semantic information of a face provides a strong prior for restoration. As such, we propose to incorporate global semantic priors as input and impose local structure losses to regularize the output within a multi-scale deep CNN. We train the network with perceptual and adversarial losses to generate photo-realistic results and develop an incremental training strategy to handle random blur kernels in the wild. Quantitative and qualitative evaluations demonstrate that the proposed face deblurring algorithm restores sharp images with more facial details and performs favorably against state-of-the-art methods in terms of restoration quality, face recognition and execution speed.
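The local structure losses described above can be illustrated with a minimal sketch: a global pixel loss plus per-component terms reweighted by semantic masks (e.g., eyes, mouth). The function name, mask format, and weights below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def semantic_structure_loss(pred, target, masks, weights):
    """Global pixel loss plus semantic-mask-weighted local structure terms.

    pred, target : float arrays of the same shape (restored / sharp image).
    masks        : list of binary arrays marking key facial components
                   (hypothetical format; the paper uses parsed semantic maps).
    weights      : per-component weights (illustrative values).
    """
    # Global content term over the whole image.
    loss = np.mean((pred - target) ** 2)
    # Local structure terms: squared error averaged inside each component mask.
    for m, w in zip(masks, weights):
        diff = ((pred - target) ** 2) * m
        loss += w * diff.sum() / max(m.sum(), 1)
    return loss
```

A usage example: with identical images the loss is zero, and a constant error of 1 with a single mask of weight 0.5 yields 1.0 (global) + 0.5 (local) = 1.5.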
Technical Paper and Code
Testing Datasets [Helen] [CelebA] [Read me]
Training Dataset [Kernel]
Our Deblurring Results [Helen] [CelebA]
Quantitative evaluation on different sizes of blur kernels on the Helen dataset.
Quantitative evaluation on different sizes of blur kernels on the CelebA dataset.
Quantitative evaluation on face identity distance.
References

D. Krishnan, T. Tay, and R. Fergus. Blind deconvolution using a normalized sparsity measure. In CVPR, 2011.
J. Pan, Z. Hu, Z. Su, and M.-H. Yang. Deblurring face images with exemplars. In ECCV, 2014.
Q. Shan, J. Jia, and A. Agarwala. High-quality motion deblurring from a single image. ACM TOG (Proceedings of SIGGRAPH), 27(3):73:1–73:10, 2008.
L. Xu, S. Zheng, and J. Jia. Unnatural L0 sparse representation for natural image deblurring. In CVPR, 2013.
S. Cho and S. Lee. Fast motion deblurring. ACM TOG (Proceedings of SIGGRAPH Asia), 28(5):145:1–145:8, 2009.
Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In ICCV, 2015.
S. Nah, T. H. Kim, and K. M. Lee. Deep multi-scale convolutional neural network for dynamic scene deblurring. In CVPR, 2017.