DMPHN-v2-deblur

Abstract

Dynamic scene deblurring is an important problem in computer vision. The multi-scale strategy has been successfully extended to deep end-to-end learning-based deblurring, but its high computational cost gave rise to the multi-patch framework, whose success comes from the local residual information passed across the hierarchy. One problem is that the finest levels contribute little to their residuals, so their contribution is overwhelmed by the coarser levels, which limits deblurring performance. To this end, we propose a nested block attention module that exploits the powerful representation ability of the attention mechanism to improve the encoder-decoder architecture used in the multi-patch model. Our modification not only enables the network to boost the contribution of the finest levels to their residuals, but also lets it learn different weights for feature information extracted from spatially-varying blurred images. Extensive experiments show that the improved network achieves competitive PSNR and SSIM on the GoPro dataset and slightly outperforms the original version.
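The abstract describes an attention module that learns per-channel weights for the encoder-decoder features. As a rough illustration (not the paper's exact module; `w1` and `w2` are hypothetical learned projection weights), a squeeze-and-excitation-style channel attention can be sketched in numpy as:

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Sketch of channel attention: reweight feature channels by learned importance.

    features: (C, H, W) feature map
    w1: (C // r, C) and w2: (C, C // r) -- hypothetical learned bottleneck weights
    """
    # Squeeze: global average pool over spatial dimensions -> (C,)
    squeezed = features.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid gate in [0, 1]
    hidden = np.maximum(w1 @ squeezed, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))
    # Reweight each channel of the feature map by its gate value
    return features * gate[:, None, None]
```

In a trained network the gates would differ per channel, letting spatially-varying blur cues be weighted unequally.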

Comparison with SOTA methods
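The comparisons with state-of-the-art methods are reported in PSNR and SSIM. For reference, PSNR can be computed directly (SSIM is usually taken from a library such as scikit-image); this is a minimal sketch, not the paper's evaluation script:

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images of the same shape."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```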

Intermediate residual outputs

For each scene, the first row presents the results of DMPHN [35] and the second row presents ours (DMPHN-v2). From right to left, the images show the finest level (R4) to the coarsest level (R1), respectively.

[Image grid: per-scene residual outputs at levels 1 through 4, DMPHN (top row) vs. DMPHN-v2 (bottom row)]
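The four levels correspond to the multi-patch hierarchy, in which each deeper level processes a finer partition of the input. A minimal sketch of such a split, assuming the common 1-2-4-8 patch scheme with alternating cut directions (the paper's exact split may differ):

```python
import numpy as np

def split_patches(image, level):
    """Split an image for one level of a multi-patch hierarchy.

    Level 1 keeps the whole image; each deeper level doubles the number of
    non-overlapping patches, alternating cuts along width and height.
    """
    patches = [image]
    for step in range(level - 1):
        axis = 1 if step % 2 == 0 else 0  # alternate width/height cuts
        halved = []
        for p in patches:
            half = p.shape[axis] // 2
            if axis == 1:
                halved.extend([p[:, :half], p[:, half:]])
            else:
                halved.extend([p[:half, :], p[half:, :]])
        patches = halved
    return patches
```

Levels 1 through 4 thus yield 1, 2, 4, and 8 patches, matching the coarse-to-fine residuals (R1 to R4) shown above.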

Code

Cite

Z. Zhao, B. Xiong, S. Gai and L. Wang, "Improved Deep Multi-Patch Hierarchical Network With Nested Module for Dynamic Scene Deblurring," in IEEE Access, vol. 8, pp. 62116-62126, 2020, doi: 10.1109/ACCESS.2020.2984002.