An image inpainting method using pLSA-based search space estimation
Mrinmoy Ghorai, Bhabatosh Chanda
Indian Statistical Institute, Kolkata
In this work, we present a novel exemplar-based image inpainting technique based on a local context measure of the target patch. The proposed method comprises three main steps: determination of patch priority, estimation of the search space for candidate patches, and patch completion to fill in the unknown pixels of the target patch. In computing patch priority, we emphasize structure through the spatial relationship of similar neighbouring patches and kernel-regression-based local image structure. To find candidate patches, we estimate a search space, i.e. sub-regions of the entire source region that are similar to the region surrounding the target patch. This search space is estimated using probabilistic latent semantic analysis (pLSA). Finally, we infer the unknown pixels of the target patch using pLSA-based context and a histogram similarity measure between the target patch and the candidate patches. Experimental results compare favourably with those of competing methods, and the approach may be used for digital restoration of images of defective or damaged artifacts.
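To make the search-space estimation step concrete, the following is a minimal sketch (not the authors' implementation) of fitting pLSA by EM on a document-word count matrix. Under the paper's analogy, image blocks play the role of "documents" and quantized patch descriptors play the role of "visual words"; blocks whose topic distribution P(z|d) is close to that of the block containing the target patch would then form the restricted search space. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def plsa(counts, n_topics, n_iter=50, seed=0):
    """Fit pLSA by EM on a nonnegative document-word count matrix.

    counts : (n_docs, n_words) array of visual-word counts per block.
    Returns P(z|d) with shape (n_docs, n_topics) and
            P(w|z) with shape (n_topics, n_words).
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    # Random normalized initialization of the two factor distributions.
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: posterior P(z|d,w) proportional to P(z|d) * P(w|z).
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]        # (d, z, w)
        post = joint / (joint.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: re-estimate factors from expected counts.
        weighted = counts[:, None, :] * post                 # (d, z, w)
        p_z_d = weighted.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
        p_w_z = weighted.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z
```

A hypothetical usage: compute `p_z_d` over all image blocks, then keep as the search space the blocks whose topic distribution has the smallest distance (e.g. L2 or KL divergence) to the target block's distribution, and match candidate patches only within those blocks.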
The test visual document is marked by a blue circle and the similar visual documents are marked by yellow circles.
Green, blue and yellow circles mark, respectively, the location of the target patch, the candidate patches found by traditional search, and the blocks constituting the pLSA-estimated search space.
The first row shows five original images. The remaining six rows show, from first to sixth, the images with the target region, the results of Criminisi’s exemplar-based algorithm [1], Komodakis’s priority-BP-based algorithm [3], Zongben’s sparsity-based approach [6], Darabi’s image melding technique [2], and the proposed method.
Target image PatchMatch [7] Image Melding [2] Super-resolution [5] Proposed method
"An image inpainting method using pLSA-based search space estimation" Mrinmoy Ghorai and Bhabatosh Chanda Machine Vision and Applications, 26(1): 69-87 (2015)