One approach we took towards eliminating the diffraction patterns was edge detection. Edge detection uses various edge detection algorithms and filters to identify changes in intensity and color within an image. With the edges found, we can then apply additional filters or other techniques to try and eliminate the edges that we are seeing from the diffraction patterns.
A number of different edge detection algorithms were used. These algorithms were Prewitt, Sobel, Scharr, and Canny. In addition to these, an attempt was made at generating an edge detection filter to look for a specific angle of edge within the image, described in more detail later. Each of these algorithms is described in more detail below.
The Prewitt and Sobel edge detection algorithms each use a matrix, Gp and Gs respectively, that is convolved with the image to find its edges. Each has a matrix for detecting vertical edges, Gpx and Gsx, and one for detecting horizontal edges, Gpy and Gsy; note that each pair is just a transpose (Gpx = Transpose(Gpy)). By adding the x and y responses together, we can detect both the vertical and horizontal edges within an image. The key difference between the Prewitt and Sobel algorithms is that the coefficients of the Sobel operator can be adjusted somewhat to suit specific applications, though they must still satisfy certain properties, so we don't have complete control over their values. The Prewitt and Sobel operators (matrices) are given below, as well as an example of them in action.
In addition to convolving the edge detection operator with the image, one other parameter was used: a post-convolution intensity threshold. The image and operator are convolved, and we then keep only the pixels whose intensity lies above some threshold value. This lets us adjust the sensitivity of the algorithms while keeping the operators (G) the same.
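The convolve-then-threshold pipeline described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the function names are ours, the image is assumed to be a 2D float array, and the naive loop-based convolution is chosen for clarity over speed.

```python
import numpy as np

# Prewitt and Sobel operators for vertical (x) edges; the horizontal
# (y) operators are their transposes, as noted in the text.
GPX = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])
GPY = GPX.T
GSX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
GSY = GSX.T

def convolve2d(image, kernel):
    """Naive same-size 2D convolution with zero padding."""
    kh, kw = kernel.shape
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

def detect_edges(image, gx, gy, threshold):
    """Combine x and y responses, then keep only strong responses."""
    response = np.abs(convolve2d(image, gx)) + np.abs(convolve2d(image, gy))
    return response > threshold  # boolean edge map
```

Raising `threshold` suppresses faint gradients and keeps only the brightest diffraction spikes, which is the sensitivity adjustment described above.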
The Prewitt Operator. Adapted from [15].
The Sobel Operator. Adapted from [16].
The Carina Nebula, Original Image [17]:
Prewitt Edge Detection:
Sobel Edge Detection:
Note that Prewitt and Sobel both returned very similar plots for the edges they detected. Both do a fairly good job detecting the diffraction pattern we are looking for; however, they have some trouble detecting the diagonal lines compared to the strictly horizontal and vertical ones. For this reason, we chose to explore some additional edge detection methods.
Scharr edge detection functions very similarly to Prewitt and Sobel. It uses a matrix, Gh, that when convolved with our image returns an image of the edges it detects. Just like Prewitt, it has two matrices, one for vertical edge detection (Ghx) and one for horizontal edges (Ghy); these matrices are again transposes of each other. As with Prewitt and Sobel, we also used a post-convolution threshold value for the Scharr edge detection.
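Because Scharr is a drop-in replacement for the Prewitt and Sobel operators, only the kernel changes; the convolution and thresholding stay the same. A sketch with the standard Scharr weights (variable names are ours):

```python
import numpy as np

# Standard Scharr operator for vertical edges; as with Prewitt and
# Sobel, the horizontal operator is its transpose.
GHX = np.array([[-3, 0, 3], [-10, 0, 10], [-3, 0, 3]])
GHY = GHX.T
```

The heavier center weighting (10 vs. Sobel's 2) is what gives Scharr its better rotational symmetry, consistent with its stronger performance on the diagonal diffraction spikes noted below.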
The Scharr Operator. Adapted from [18].
Scharr Edge Detection:
The Scharr edge detection performs better than Prewitt and Sobel across the image. It does a much better job detecting the full extent of the diffraction pattern, particularly the diagonal lines. Of note, however, is that it also picks up the much smaller gradient changes across the rest of the image. This makes things difficult when we then attempt to remove the diffraction pattern, as we lose a lot of the detail previously seen in the rest of the image.
Another edge detection method we explored was the Canny algorithm. Canny is considerably more complicated than Prewitt, Sobel, or Scharr, so we won't dive into the details here, but a brief overview follows. The algorithm first applies noise reduction, then an edge detector (such as Sobel), then an edge-thinning process to reduce the width of the edges, and finally selects what it calls 'strong' edges so that only the most relevant edges remain.
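The four stages just listed can be sketched compactly. This is a simplified illustration using `scipy.ndimage`, not the implementation the project used; the function name, default thresholds, and sigma are ours, and the image is assumed to be a 2D float array.

```python
import numpy as np
from scipy import ndimage

def canny_sketch(image, sigma=1.0, low=0.1, high=0.3):
    """Simplified Canny: smooth, gradient, thin, keep strong edges."""
    # 1. Noise reduction with a Gaussian blur.
    smoothed = ndimage.gaussian_filter(image, sigma)
    # 2. Sobel gradients give edge strength and direction.
    gx = ndimage.sobel(smoothed, axis=1)
    gy = ndimage.sobel(smoothed, axis=0)
    mag = np.hypot(gx, gy)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180
    # 3. Edge thinning: non-maximum suppression along the gradient.
    thin = np.zeros_like(mag)
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:       # horizontal gradient
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:                   # 45-degree gradient
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:                  # vertical gradient
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:                            # 135-degree gradient
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                thin[i, j] = mag[i, j]
    # 4. Hysteresis: keep 'strong' edges plus weak edges touching them.
    strong = thin >= high * thin.max()
    weak = thin >= low * thin.max()
    labels, _ = ndimage.label(weak)
    keep = np.unique(labels[strong])
    return np.isin(labels, keep) & weak
```

The thinning step (3) is why Canny returns only the outer edge of a thick diffraction spike rather than its full width, which is the behavior discussed below.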
Canny Edge Detection:
The Canny algorithm performs well, like the Scharr, but with one major difference. Because Canny purposely thins edges to give only the most prominent ones, it detects only the outer edge of the diffraction patterns around stars. This makes removing these diffraction patterns a little more difficult, as we don't have the whole line selected the way the Scharr algorithm was able to do.
One final edge detection algorithm we explored was an angled filter that could be convolved with the image, similar to the Prewitt and Sobel operators. Because the diffraction patterns in the images are caused by the structure of the telescope, the angles at which the diffraction lines appear are known and constant across the vast majority of the images (a few have been rotated, making their angles unknown). That means we can, in principle, design an operator that looks for edges of known angle when convolved with the image.
To do this, the first step was to identify the angles. By looking at NASA's documentation on the JWST, we determined that the diagonal edges of the diffraction patterns occur at 60 degrees to the vertical. We focused on the diagonals because we already have algorithms for horizontal and vertical edges, and Prewitt and Sobel struggled with the diagonal edges.
The next step is to design the filter. For a 60 degree right triangle with hypotenuse 1, the two shorter side lengths are sqrt(3)/2 and 1/2. This is a problem because a matrix cannot have an irrational dimension. To get around this, we use a rational approximation of sqrt(3), such as 71/41, which lets us create a matrix with approximately the side-length ratio of a 60 degree right triangle. We then place -1's on one side of the 60 degree line and 1's on the other to create the gradient difference we are looking for across that line, and finally convolve this operator with the image.
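One way to build such an operator is sketched below. This is our interpretation of the construction described above, not the project's actual code: given a rational approximation p/q of sqrt(3), a q-by-p kernel's corner-to-corner diagonal runs at roughly 60 degrees to the vertical, and we sign the two sides of that diagonal oppositely.

```python
import numpy as np

def angled_operator(p, q):
    """Edge operator for a line at ~60 degrees to the vertical.

    p / q approximates sqrt(3) (e.g. 17/10 or 71/41).  The diagonal of
    a q-by-p kernel then has horizontal run / vertical rise ~ sqrt(3),
    i.e. ~60 degrees from vertical.  Pixels on one side of the diagonal
    get +1, the other side -1, so convolution responds strongly where
    the image gradient crosses a line at that angle.
    """
    kernel = np.zeros((q, p))
    for i in range(q):
        for j in range(p):
            # The diagonal through the origin satisfies j = (p / q) * i.
            if j * q < i * p:
                kernel[i, j] = 1.0
            elif j * q > i * p:
                kernel[i, j] = -1.0
            # Pixels exactly on the line stay 0.
    return kernel
```

Note that a better approximation like 589/340 forces a much larger kernel (340 x 589), which matches the observation below that filter size, not approximation quality, dominates the results.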
These are the resulting edge detections for a few different rational approximations of sqrt(3): 17/10, 71/41, and 589/340. Interestingly, the results seem to depend more on the size of the filter than on how good the approximation of sqrt(3) is.
In hindsight, this makes sense. The worse approximations of sqrt(3) tend to be rational numbers with smaller numerators and denominators, like 17/10. The smaller this denominator is, the smaller the resulting operator will be. A smaller operator gives better results because it's attempting to find edges over a smaller area within the image.
Even with the best filter, where sqrt(3) ~= 17/10, the detection doesn't show us as many of the diffraction patterns as we'd like. They are present, but difficult to identify in such a crowded image.
With various edge detection algorithms implemented, the next step is to see how we can use them to remove the diffraction patterns that they observe. Thus far, we've only tried two different approaches to do this, but we plan to attempt more in the near future. The two methods that were attempted were zeroing and local averaging.
One method to remove the diffraction patterns is simply to zero out any pixels where an edge was detected. This method does a good job of removing the patterns, but it replaces the pattern with the same pattern, just in black. This isn't truly removing it, but it is an interesting experiment nonetheless. Another downside to this method is that it eliminates a lot of the detail visible elsewhere in the image. This is particularly true for groups of small stars, where lots of small edges get zeroed out.
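Zeroing is a one-line operation given an edge mask from any of the detectors above; a sketch (names are ours, image assumed to be an H x W x 3 RGB array):

```python
import numpy as np

def zero_edges(image, edge_mask):
    """Zero out every pixel flagged by the edge detector.

    `edge_mask` is the H x W boolean map produced by thresholding a
    detector's response; flagged pixels are set to black.
    """
    cleaned = image.copy()
    cleaned[edge_mask] = 0  # the pattern remains, just rendered black
    return cleaned
```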
Another method the team tried was local averaging of edge pixels. This algorithm takes every pixel where an edge was identified, averages a square (of some given size) around it, and sets the value of the edge pixel to that averaged value.
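A sketch of that averaging step (our names and conventions, not the project's code). Note this version averages windows of the original image, so already-replaced edge pixels don't feed into later averages; windows are clipped at the image border.

```python
import numpy as np

def local_average_edges(image, edge_mask, size=5):
    """Replace each edge pixel with the mean of a size x size window."""
    r = size // 2
    cleaned = image.copy().astype(float)
    for i, j in zip(*np.nonzero(edge_mask)):
        # Window clipped to the image bounds; mean taken per channel.
        window = image[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1]
        cleaned[i, j] = window.mean(axis=(0, 1))
    return cleaned
```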
A few plots are shown for the Scharr edge detection algorithm with variously sized local averaging filters. The averaging does a pretty good job of removing the diffraction patterns for the most part, but the downside is that most of the rest of the image is blurred as well.
As was seen in the local averaging, even averaging over small distances greatly blurs the resulting image. This takes away a lot of the detail that we could previously see within the image, although it does also succeed in eliminating the diffraction patterns in many cases. To help resolve the subsequent blurring, one approach is to identify regions where we can see the diffraction pattern, and then perform local averaging only on the edges within that region. An example of one of these attempts is given below. A different image was used because the stars with diffraction patterns are much more localized than those in the image used thus far.
Arp 142 - the Penguin guarding the egg [19].
Arp 142 - with patch localized averaging around the 5 stars with diffraction patterns.
The patch localized averaging does a much better job of preserving the overall image than the localized averaging applied to the entire image, although some blurring and loss of detail can be seen in the blue 'cloud' behind the pair of diffraction-pattern stars at the bottom of the image. At the same time, however, the averaging doesn't do a perfect job of removing the diffraction pattern: while the pattern is certainly much fainter, it remains nonetheless. One explanation is that the averaging function is overwhelmed by the brightness of the center of the star. Because so much of the rest of the image is very dark, the center of the star has a very large effect on the new value of points near it. Instead of fading into the background, those points just take on a less intense version of their former color. This problem was also noticed with the earlier localized averaging, but there it was mitigated by a background that wasn't quite as dark, so the lessened diffraction pattern blended into it better.
As was briefly mentioned above for the patch local averaging, one explanation for the diffraction pattern remaining, albeit much fainter, is the averaging function being overwhelmed by the brightness of the center of the star. To reduce the impact of the star's core and other very intense pixels, an intensity threshold was also implemented. This function is very similar to the patch local averaging, but when averaging it additionally includes only pixels whose intensity falls below a certain threshold, normalized by the brightest pixel in the image.
To do this, the user gives some intensity threshold as a decimal (say 0.8), and the function then averages only over pixels whose intensity is below 80% of the largest-intensity pixel in the given patch. Intensity was defined as (R + G + B) / 3; there are other methods of computing the intensity of an RGB pixel, and results may differ slightly with a different choice. Below are a few attempts with different intensity threshold values.
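A sketch of the thresholded variant, following the per-window normalization described in the paragraph above (names, window handling, and defaults are ours):

```python
import numpy as np

def thresholded_average(image, edge_mask, size=5, t=0.8):
    """Patch-average edge pixels, excluding very bright pixels.

    Intensity is (R + G + B) / 3.  Pixels brighter than `t` times the
    brightest pixel in the window are dropped from the average, so a
    star's core cannot dominate the replacement values of its
    neighbors.
    """
    r = size // 2
    cleaned = image.copy().astype(float)
    intensity = image.mean(axis=2)  # (R + G + B) / 3 per pixel
    for i, j in zip(*np.nonzero(edge_mask)):
        sl = (slice(max(i - r, 0), i + r + 1),
              slice(max(j - r, 0), j + r + 1))
        win, win_int = image[sl], intensity[sl]
        keep = win_int < t * win_int.max()
        if keep.any():  # fall back to the original pixel otherwise
            cleaned[i, j] = win[keep].mean(axis=0)
    return cleaned
```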
The intensity thresholding did not have quite the effect that we were hoping for. The only noticeable difference is slight changes in the brightness of the image, near the centers of stars within the patches. No noticeable improvements were made in eliminating the diffraction pattern.
At this point in the project, we believe we have largely reached the limits of what averaging and other relatively simple edge-detection-based methods can do while preserving the rest of the image. Overall, they significantly reduce the prominence of the pattern through a variety of approaches while maintaining much of the image. They cannot, however, entirely eliminate the pattern from images, and as such have not quite achieved our goal.