PSF Deconvolution custom operators
We experimented with many different custom PSFs, combining attributes of several standard PSFs, such as the edge detection capability of the Prewitt kernel with the shape of the diffraction pattern. Overall, our custom PSF modeled the diffraction pattern most accurately and was therefore the best at restoring the image.
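Our work was done in MATLAB, but the idea behind such a custom PSF can be sketched in a few lines of NumPy. This is a hypothetical illustration, not our exact kernel: a bright central core plus narrow Gaussian ridges at the 60-degree spacings of the diffraction spikes (the function name and parameters are our own for this sketch).

```python
import numpy as np

def spike_psf(size=31, angles_deg=(0, 60, 120), sigma=0.8):
    """Illustrative diffraction-spike PSF: ridges through the centre at
    the given angles plus a bright core, normalised to unit sum."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    psf = np.zeros((size, size))
    for a in np.deg2rad(angles_deg):
        # Perpendicular distance from each pixel to a line through the
        # origin at angle a; a narrow Gaussian ridge along that line
        d = np.abs(x * np.sin(a) - y * np.cos(a))
        psf += np.exp(-(d / sigma) ** 2)
    # Bright central core, then normalise so the kernel conserves flux
    psf += 4.0 * np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()
```

A kernel like this can then be passed to a deconvolution routine in place of a predefined PSF.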
Edge Detection with patch-based, intensity-thresholded, non-edge local averaging
This was the best overall performance we achieved using edge detection. We found that by applying local averaging to patches, we minimized image blurriness. Additionally, by averaging only over pixels below a certain intensity threshold (to avoid the bright center of the star), we removed more of the diffraction pattern.
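Our implementation was in MATLAB; the logic can be sketched in NumPy as follows (the helper name, window size, and patch coordinates are illustrative, not project code). Each pixel inside a hand-picked patch is replaced by the mean of its dim neighbours only.

```python
import numpy as np

def threshold_local_average(img, patches, win=3, thresh=0.8):
    """For each patch (r0, r1, c0, c1), replace every pixel with the
    mean of neighbouring pixels below `thresh`, leaving the bright
    stellar core out of the average. Reads from the original image."""
    out = img.astype(float).copy()
    half = win // 2
    for r0, r1, c0, c1 in patches:
        for r in range(r0, r1):
            for c in range(c0, c1):
                rs, re = max(r - half, 0), min(r + half + 1, img.shape[0])
                cs, ce = max(c - half, 0), min(c + half + 1, img.shape[1])
                window = img[rs:re, cs:ce].astype(float)
                dim = window[window < thresh]  # exclude bright pixels
                if dim.size:
                    out[r, c] = dim.mean()
    return out
```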
Custom Edge Detection with a rational approximation of sqrt(3)
We found that the closer the rational approximation of sqrt(3), the worse the custom edge detector performed. We attributed this to the larger search area that a closer approximation required, which resulted in poor performance.
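As an illustration of the idea (a NumPy sketch, not our MATLAB code), a 60-degree directional kernel can be formed by combining the Prewitt kernels with cos(60°) = 1/2 and a rational stand-in for sin(60°) = sqrt(3)/2; the choice of 26/15 here is just one example approximation of sqrt(3).

```python
import numpy as np
from fractions import Fraction

# Prewitt horizontal gradient kernel; vertical is its transpose
PX = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
PY = PX.T

def edge_kernel_60(approx=Fraction(26, 15)):
    """Directional gradient kernel for 60-degree edges:
    cos(60)*Gx + sin(60)*Gy, with sin(60) = sqrt(3)/2 replaced by a
    rational approximation of sqrt(3), divided by 2."""
    s = float(approx) / 2.0  # rational stand-in for sin(60 deg)
    return 0.5 * PX + s * PY
```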
PSF Deconvolution using MATLAB predefined PSFs
Overall, none of the MATLAB predefined PSFs did a good job of removing the diffraction pattern while maintaining image quality. The best result with the predefined PSFs came from a Prewitt PSF with Wiener deconvolution: the diffraction pattern was mostly removed, but blur was added and the center of the celestial object was also removed.
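Wiener deconvolution (MATLAB provides this as deconvwnr) amounts to a regularised division in the frequency domain. A self-contained NumPy sketch of the operation, where k is a hypothetical noise-to-signal constant:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Regularised inverse filter: divide by the PSF's transfer
    function H, damped by the constant k so that frequencies where
    |H| is small are not amplified into noise."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F_hat))
```

Larger k suppresses noise amplification at the cost of the added blur noted above.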
Dictionary Learning
Although it has great potential, we were not able to successfully process images to remove the artifact.
Currently, too much distortion of the rest of the image occurs, while too little of the artifact is actually processed out.
One thing that the team didn't quite get the time to address was automating the diffraction pattern identification
This process could be very useful in applying the patch local averaging that the team did with edge detection. At the moment, the patches are identified manually and then fed into MATLAB so it knows where to average. Alternatively, code could be written to automatically identify these diffraction patterns and save quite a bit of time in doing so.
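As a starting point, such automation could be as simple as thresholding on brightness and boxing each connected cluster of bright pixels. A hypothetical NumPy sketch (the function and its parameters are our own illustrative names, not project code):

```python
import numpy as np

def find_bright_patches(img, thresh, pad=1):
    """Label 4-connected clusters of pixels brighter than `thresh`
    with a simple flood fill, then return one padded bounding box
    (r0, r1, c0, c1) per cluster -- a rough stand-in for the manual
    patch selection described above."""
    mask = img > thresh
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not seen[r, c]:
                stack, px = [(r, c)], []
                seen[r, c] = True
                while stack:  # flood fill one cluster
                    i, j = stack.pop()
                    px.append((i, j))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols \
                                and mask[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            stack.append((ni, nj))
                rs = [p[0] for p in px]
                cs = [p[1] for p in px]
                boxes.append((max(min(rs) - pad, 0),
                              min(max(rs) + pad + 1, rows),
                              max(min(cs) - pad, 0),
                              min(max(cs) + pad + 1, cols)))
    return boxes
```

The returned boxes could then feed directly into the patch averaging step instead of manually chosen coordinates.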
Automated diffraction pattern identification could also be of use with deconvolution. Instead of performing deconvolution on the entire image, with automatic identification we would perform deconvolution on only the identified sections. Performing deconvolution on only sections of the image could reduce the artifacts, distortion, and blur that occur when deconvolution is applied to the entire image. Additionally, we could tune the parameters more aggressively without having to worry about the negative consequences of distorting the entire image.
With more time and future work, we would also like to further refine the dictionary learning algorithm. Combined with some way of isolating the artifacts, it could make a more significant difference by applying harsher adjustments only where they are needed. This could be done in conjunction with edge detection, using that process to choose patches to edit specifically where the artifact occurs. Some specific areas to explore next:
Edge detection could be used to locate the pattern, and only that specific piece of the image would be processed.
Adjusting the step size and gradient calculation in fista() could speed up the early stages and make the end result more refined.
Find a better starting dictionary or better training images (which had a larger-than-expected effect on the final result)
A variety of image processing techniques
Edge detection
In class, we briefly discussed edge detection, its uses, and its implementation. We expanded upon this using both the algorithms covered in class and others that we found particularly useful for our application. The four edge detection methods we used were Prewitt, Sobel, Scharr, and Canny.
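The project itself used MATLAB; as an illustration of what Prewitt and Sobel share, here is a NumPy sketch of the directional-kernel idea (Scharr only changes the kernel weights, and Canny adds smoothing, thinning, and hysteresis thresholding on top). The helper name is our own for this sketch.

```python
import numpy as np

# Prewitt and Sobel horizontal-gradient kernels (vertical = transpose)
PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
SOBEL_X   = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def gradient_magnitude(img, kx):
    """'Valid'-mode 2-D correlation of the image with kx and its
    transpose, combined into a single edge-strength map."""
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros_like(gx)
    for r in range(h - 2):
        for c in range(w - 2):
            window = img[r:r + 3, c:c + 3]
            gx[r, c] = np.sum(window * kx)
            gy[r, c] = np.sum(window * ky)
    return np.hypot(gx, gy)  # edge strength per pixel
```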
We also developed our own custom edge detection operators for detecting edges at a 60 degree angle using rational approximations of sqrt(3)/2. While this approach didn't yield quite what we wanted, it proved interesting and could be a starting point for some future work.
Filtering and Deconvolution
Overall, we learned that image deconvolution can be effective, depending on the type and severity of the image distortion. In class, we talked at length about convolution and transfer functions, and we used the relationship between multiplication and convolution in the time and frequency domains to solve complex equations. We learned that the same theory applies in image deconvolution, where we want to solve for the original image given a captured image and an estimate of the distortion. These relationships translate directly to real-world signals, such as the images we used! Although we are not the ones solving these complex transforms and inverse transforms, we understand the theory thanks to EECS 351.
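The key identity, that circular convolution in the spatial domain equals multiplication in the frequency domain, can be checked numerically. A small NumPy demo (our own illustration, not project code):

```python
import numpy as np

# Circular convolution in the spatial domain equals multiplication of
# 2-D DFTs -- the relationship that deconvolution methods invert.
rng = np.random.default_rng(1)
img = rng.random((8, 8))
psf = rng.random((8, 8))

# Route 1: multiply the 2-D DFTs and invert
via_fft = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))

# Route 2: direct circular convolution for comparison
direct = np.zeros_like(img)
for r in range(8):
    for c in range(8):
        for i in range(8):
            for j in range(8):
                direct[r, c] += img[i, j] * psf[(r - i) % 8, (c - j) % 8]

assert np.allclose(via_fft, direct)
```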
Frequency Domain Analysis
Fast Fourier Transform
Generating a visual representation of the spatial frequency spectrum of an image allowed us to see the diffraction pattern and provided a path to potentially suppress frequencies associated with the diffraction artifact.
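The visualisation step itself is simple: shift the 2-D FFT so the DC component sits at the centre and take a log magnitude. A NumPy sketch (hypothetical helper name):

```python
import numpy as np

def log_spectrum(img):
    """Centred log-magnitude spectrum: the standard way to visualise
    spatial-frequency content, e.g. the ridges left by a diffraction
    pattern. log1p compresses the huge dynamic range for display."""
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.log1p(np.abs(F))
```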
Filtering and deconvolution also tie into frequency domain analysis: the math is done in the frequency domain, and the methods themselves rely on manipulating signals there.
Linear Systems and Properties
We used a few different linear systems and techniques to help process our images.
Edge detection averaging
Some of our initial averaging functions simply averaged around an edge pixel to define a new value for that pixel. This is a simple moving average function that we apply to pixels where we have edges.
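In NumPy terms (a sketch of the idea, not our MATLAB implementation), the operation is:

```python
import numpy as np

def average_edges(img, edge_mask, win=3):
    """Replace every flagged edge pixel with the mean of its
    win x win neighbourhood -- the simple moving-average step
    described above. Non-edge pixels are left unchanged."""
    out = img.astype(float).copy()
    half = win // 2
    rows, cols = img.shape
    for r, c in zip(*np.nonzero(edge_mask)):
        rs, re = max(r - half, 0), min(r + half + 1, rows)
        cs, ce = max(c - half, 0), min(c + half + 1, cols)
        out[r, c] = img[rs:re, cs:ce].mean()
    return out
```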
A brief discussion of the system properties of the JWST and its PSF
We briefly discuss the properties of the PSF for the JWST, and hence the JWST optical system. The PSF can be nonlinear, making it difficult to estimate. Additionally, it can vary at different focal and spatial points; this most closely translates to it being shift-varying.
Dictionary Learning
Dictionary learning, in a nutshell, uses a training image to train a dictionary matrix over many iterations. It then uses this dictionary to change patches of a new testing image where we have identified artifacts that we wish to remove.
Dictionary learning is difficult because there are many ways to optimize it, and many parameters that could be wrong or simply suboptimal:
The initial dictionary had a substantial effect on the final image
The value of the fista() parameter, lambda, also changed the pattern of convergence
The step size of fista() was not altered significantly, but in future work, it could be a great way to get different results.
Reference [23] was a main source for our initial understanding of dictionary learning.
To update the sparse coefficient matrix, FISTA (Fast Iterative Shrinkage-Thresholding Algorithm) was used
We used this algorithm to solve the minimization problem min over D, Z of ‖X − D·Z‖_F² + λ‖Z‖₁, where ‖·‖_F is the Frobenius norm and ‖Z‖₁ is the element-wise sum of absolute values
We experimented with adjusting the parameters of the algorithm and how it affected convergence and the results of image processing
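For the sparse-coding step with the dictionary held fixed, FISTA alternates a gradient step on the quadratic term, a soft-thresholding step for the λ‖Z‖₁ term, and a momentum update. A NumPy sketch under those assumptions (not our MATLAB fista(); the step size 1/L uses the Lipschitz constant of the gradient):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: shrink toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(X, D, lam, n_iter=100):
    """Minimise ||X - D Z||_F^2 + lam * ||Z||_1 over Z, D fixed."""
    L = 2 * np.linalg.norm(D, 2) ** 2   # Lipschitz constant of gradient
    Z = np.zeros((D.shape[1], X.shape[1]))
    Y, t = Z.copy(), 1.0
    for _ in range(n_iter):
        grad = 2 * D.T @ (D @ Y - X)            # gradient of quadratic
        Z_new = soft_threshold(Y - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        Y = Z_new + ((t - 1) / t_new) * (Z_new - Z)  # momentum step
        Z, t = Z_new, t_new
    return Z
```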
After identifying the sparse coefficient matrix and the trained dictionary, the dictionary was used to remove the artifact from the testing image
The test image is split up into a matrix of patches, which is then used to calculate a sparse matrix with one iteration of fista().
The sparse matrix and dictionary are multiplied together, reconstructed_patches = D・Z, and those patches are reassembled into the final image.
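The reconstruction step amounts to one matrix product followed by re-tiling. A simplified NumPy sketch assuming non-overlapping patches laid out on a grid (the helper name and patch layout are illustrative):

```python
import numpy as np

def reconstruct(D, Z, patch_shape, grid_shape):
    """Multiply the trained dictionary by the sparse codes and tile
    the resulting patch columns back into an image."""
    ph, pw = patch_shape
    gr, gc = grid_shape
    patches = D @ Z                      # one patch per column
    img = np.zeros((gr * ph, gc * pw))
    for k in range(patches.shape[1]):
        r, c = divmod(k, gc)             # patch position on the grid
        img[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw] = \
            patches[:, k].reshape(ph, pw)
    return img
```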