Results

Below are the results for each algorithm. M denotes the type of sampling mask used, Y the corrupted image, X the inpainted image, and X_true the ground-truth image. PSNR values are displayed for each image to make the quality of the results easy to compare.
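
As a reference, the PSNR reported for each image is a simple function of the mean squared error between X and X_true. The sketch below is a minimal version assuming images scaled to [0, 1] and stored as NumPy arrays; the actual evaluation code may differ.

    import numpy as np

    def psnr(x, x_true, peak=1.0):
        # Mean squared error between the recovered image X and the ground truth X_true.
        mse = np.mean((np.asarray(x, dtype=np.float64) - np.asarray(x_true, dtype=np.float64)) ** 2)
        # PSNR in dB; higher values indicate a recovery closer to X_true.
        return 10.0 * np.log10(peak ** 2 / mse)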

Based on the results above, the table below compares the performance of each algorithm:

In general, Mask B produced better-quality recovered images than Mask A. This is likely because Mask A removes one relatively large patch, whereas Mask B consists of many small noisy patches, so there is less surrounding information available to recover the region missing under Mask A. Furthermore, LRMC performed better than the sparse dictionary and sparse transformation methods. One possible reason is that the formulations of the regularizers for the sparse dictionary and transformation methods are not entirely accurate; this is something I wish to investigate further in the future.
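
For reference, these methods are commonly formulated as optimization problems of the following form (using the notation above, with \odot denoting elementwise masking). The exact regularizers used in my implementation may differ in detail, and D and \Psi here stand for an assumed dictionary and sparsifying transform (e.g. wavelet or DCT):

    \min_X \|X\|_* \quad \text{s.t.} \quad M \odot X = M \odot Y                          % LRMC (nuclear-norm minimization)
    \min_\alpha \|M \odot (D\alpha - Y)\|_2^2 + \lambda \|\alpha\|_1, \quad X = D\alpha   % sparse dictionary
    \min_X \|M \odot (X - Y)\|_2^2 + \lambda \|\Psi X\|_1                                 % sparse transformation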

Below are the results of the machine learning methods: GAN and DDPM. For both methods, we ran the code provided on GitHub without further fine-tuning the neural network parameters. For the DDPM, since the reverse sampling process is stochastic, we show eight images inpainted by the DDPM with eight different noise seeds.
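
The multi-seed DDPM runs can be scripted along the lines of the sketch below; ddpm_inpaint is a hypothetical wrapper around the reverse sampling loop in the released code, and the seed handling assumes PyTorch.

    import torch

    def run_ddpm_with_seeds(y, mask, ddpm_inpaint, n_seeds=8):
        # y: corrupted image Y as a tensor; mask: binary sampling mask M (1 = observed pixel).
        # ddpm_inpaint: hypothetical callable wrapping the released DDPM inpainting code.
        results = []
        for seed in range(n_seeds):
            torch.manual_seed(seed)    # a different noise seed for each reverse sampling run
            x = ddpm_inpaint(y, mask)  # one stochastic inpainted image X per seed
            results.append(x)
        return results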

Since the image inpainting problem itself is ambiguous and the associated optimization problems are NP-hard, it is difficult to measure an algorithm's performance from a single output image. In this respect, the DDPM algorithm has the advantage over the other algorithms of generating diverse output images.