ICIP 2016
Anchored Neighborhood Regression based Single Image Super-Resolution from Self-Examples
Yapeng Tian, Fei Zhou, Wenming Yang*, Xuesen Shang and Qingmin Liao
Abstract:
In this paper, we present a novel self-learning single image super-resolution (SR) method, which restores a high-resolution (HR) image from self-examples extracted from the low-resolution (LR) input image itself, without relying on external training images. In the proposed method, we directly use sampled image patches as anchor points and learn multiple linear mapping functions, based on anchored neighborhood regression, that transform the LR space into the HR space. Moreover, we utilize flipped and rotated versions of the self-examples to expand the internal patch space. Experimental comparisons with state-of-the-art methods on standard benchmarks validate the effectiveness of the proposed approach.
Results
Fig. 1. Visual comparison of the restored HR images (close-ups) scaled by a factor of 2.
Download
- Super-resolved images in Set5 and Set14
- Source Code
- Note: the results may differ slightly from the published results due to the random anchor point selection scheme.
- Baidu Cloud
- Google Drive
- GitHub