Learning a No-Reference Quality Metric for Single-Image Super-Resolution

Chao Ma Chih-Yuan Yang Xiaokang Yang Ming-Hsuan Yang

Shanghai Jiao Tong University University of California at Merced

1. Abstract

Numerous single-image super-resolution algorithms have been proposed in the literature, but few studies address the problem of performance evaluation based on visual perception. Most super-resolution images are evaluated by full-reference metrics, yet their effectiveness is unclear and the required ground-truth images are not always available in practice. To address these problems, we conduct human subject studies using a large set of super-resolution images and propose a no-reference metric learned from the perceptual scores. Specifically, we design three types of low-level statistical features in both spatial and frequency domains to quantify super-resolved artifacts, and learn a two-stage regression model to predict the quality scores of super-resolution images without referring to ground-truth images. Extensive experimental results show that the proposed metric is effective and efficient in assessing the quality of super-resolution images based on human perception.
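As a rough illustration of the two-stage regression idea above, the sketch below trains one regressor per feature type and then a second-stage regressor that fuses the per-type estimates into a final quality score. Everything here is hypothetical: the three feature matrices are synthetic stand-ins for the spatial- and frequency-domain features, and plain least squares stands in for the learned regressors used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: three feature types per image (stand-ins for
# the paper's spatial/frequency statistical features) and perceptual scores
# rescaled to [0, 10].
n_images = 200
feats = [rng.normal(size=(n_images, d)) for d in (8, 6, 4)]
w_true = [rng.normal(size=d) for d in (8, 6, 4)]
scores = sum(f @ w for f, w in zip(feats, w_true))
scores = 10 * (scores - scores.min()) / (np.ptp(scores) + 1e-12)

def fit_linear(X, y):
    """Least-squares regressor with a bias term (a simple stand-in for
    the paper's learned regressors; the actual model differs)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict_linear(w, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ w

# Stage 1: one regressor per feature type -> per-type quality estimates.
stage1 = [fit_linear(X, scores) for X in feats]
partial = np.stack(
    [predict_linear(w, X) for w, X in zip(stage1, feats)], axis=1)

# Stage 2: fuse the per-type estimates into the final predicted score.
stage2 = fit_linear(partial, scores)
pred = predict_linear(stage2, partial)
```

The two-stage structure lets each feature type be modeled separately before a lightweight fusion step, which is the aspect this sketch is meant to convey.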


2. Paper Snapshot

"Learning a No-Reference Quality Metric for Single-Image Super-Resolution"

Chao Ma, Chih-Yuan Yang, Xiaokang Yang, Ming-Hsuan Yang

Computer Vision and Image Understanding (CVIU), 2017

[Paper] [Supplement] [Conference Version in FCV2016]

[Code] [SR-Images] [Subject Scores]

3. Subject Studies on Perceptual Evaluation


[Figure: super-resolution images produced by different algorithms, including Back Propagation (BP); images not shown]
Quality scores of the above SR images from human subjects, the proposed metric, and rescaled PSNR, SSIM, and IFC (0 is worst, 10 is best). Note that human subjects favor Dong11 over Glasner09 because the Glasner09 result is over-sharpened. The PSNR, SSIM, and IFC metrics, however, rank them the other way, because the Dong11 result is misaligned with the reference image by 0.5 pixel. In contrast, the proposed metric matches visual perception well.

4. Experimental Results

A metric matches visual perception well when its scatter plot of predicted versus perceptual scores is compact and spreads along the diagonal line.
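Beyond visual inspection of the scatter plot, this kind of agreement is commonly summarized by a rank correlation between the metric's predictions and the human scores. The sketch below implements Spearman's rank correlation with numpy only (it assumes no tied scores); the sample score vectors are made up for illustration.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    (Assumes no ties, which keeps the ranking step trivial.)"""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical per-image scores: human perceptual scores vs. a metric's
# predictions on the same six images.
human = np.array([2.1, 4.5, 6.3, 7.8, 9.0, 3.2])
metric = np.array([1.9, 4.0, 6.8, 7.5, 9.2, 3.5])
print(round(spearman_rho(human, metric), 3))  # → 1.0 (same ranking)
```

A value near 1 means the metric orders the images the same way human subjects do, which is exactly the compact-along-the-diagonal behavior described above.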

5. References

  • M. Irani, S. Peleg, Improving resolution by image registration, CVGIP: Graphical Model and Image Processing 53 (3) (1991) 231–239.
  • Q. Shan, Z. Li, J. Jia, C. Tang, Fast image/video upsampling, ACM Trans. Graph. 27 (5) (2008) 153.
  • D. Glasner, S. Bagon, M. Irani, Super-resolution from a single image, in: ICCV, 2009.
  • J. Yang, J. Wright, T. S. Huang, Y. Ma, Image super-resolution via sparse representation, TIP 19 (11) (2010) 2861–2873.
  • W. Dong, L. Zhang, G. Shi, X. Wu, Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization, TIP 20 (7) (2011) 1838–1857.
  • C.-Y. Yang, M.-H. Yang, Fast direct super-resolution by simple functions, in: ICCV, 2013.
  • R. Timofte, V. D. Smet, L. J. V. Gool, Anchored neighborhood regression for fast example-based super-resolution, in: ICCV, 2013.
  • C. Dong, C. C. Loy, K. He, X. Tang, Learning a deep convolutional network for image super-resolution, in: ECCV, 2014, pp. 184–199.