Now that we know what causes the diffraction patterns and understand a little bit about the telescope, let's dive into the data and how they become the images we actually see. Images from the JWST are all publicly available and can be downloaded from here, the Barbara A. Mikulski Archive for Space Telescopes (MAST). These, however, are not the same colorful images that we see from NASA. They are raw data from the JWST's various sensors, which detect a range of wavelengths, most of them outside visible light. This means that in order to see the beautiful images we are using for this project, someone is working behind the scenes to create the image that we see.
All of the images that we use are available from NASA, here. These are the processed images that the image specialists at NASA have created. For our purposes, we always downloaded the images at the maximum available resolution. Throughout the website, we often show the same images repeatedly to demonstrate different techniques and approaches; however, a wide variety of images were used across the project.
The JWST specializes in detecting infrared light across two regimes that NASA calls near-infrared (NIR, wavelengths 0.6 to 5 microns) and mid-infrared (MIR, wavelengths 5 to 28 microns). The images the JWST produces are generated by combining data from these different wavelength bands through NASA's post-processing. The Hubble Space Telescope, by contrast, specializes in visible and ultraviolet light. In cases where Hubble has also taken pictures of the same object, NASA sometimes combines the two to generate a new image with new colors.
This figure shows a picture from the Hubble (left) and the JWST (right), each of the same galaxy. Note again the difference in the type of light detected: the Hubble image uses ultraviolet and visible light, while the JWST uses near- and mid-infrared light (just MIR for this image). [9]
This image combines data from the JWST's MIR sensor with data from the Hubble's visible and ultraviolet sensors. [10]
These images of Arp 107, a galaxy pair, were all taken by the Webb. The first is an MIR-only image, the second NIR-only, and the third combines both. [11]
NASA image specialists follow a long process to generate the colorful images that we see. They use a variety of techniques to ensure that the final image doesn't contain artifacts or any 'false' imagery that could misinform viewers. The process is outlined here and summarized below; all the information in this section is directly summarized from [12].
1) They start by downloading the raw images and sensor data from the MAST archives.
a) This download contains the raw data from the 29 different filters of the NIR sensor alone (the MIR sensors add 10 more filters).
b) The data from each of these filters are then stretched and compressed. Stretching expands the dynamic range so the image specialists can better examine the image and identify its important features. Compression, on the other hand, maps all this data into a form that we can understand and our computers can display.
c) At the end of this first step, we have an image from each filter, all the same size and resolution, with better contrast than the originals.
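The stretch-and-compress idea from step 1 can be sketched in a few lines. This is a minimal illustration, not the actual pipeline: the `stretch_and_compress` function, the choice of an asinh stretch, and the toy `frame` data are all assumptions made here for demonstration.

```python
import numpy as np

def stretch_and_compress(raw, percent=99.5):
    """Sketch of step 1: stretch raw filter data, then compress to 8-bit.

    `raw` is a hypothetical 2-D array of calibrated flux values (the real
    MAST products are FITS files carrying far more metadata).
    """
    # Stretch: an asinh transform expands faint features while keeping
    # bright cores from dominating -- one common choice among many.
    stretched = np.arcsinh(raw - raw.min())

    # Compress: clip the brightest outliers, then rescale to 0-255 so an
    # ordinary display (and our eyes) can show the result.
    hi = np.percentile(stretched, percent)
    compressed = np.clip(stretched / hi, 0.0, 1.0) * 255.0
    return compressed.astype(np.uint8)

# Toy example: a faint gradient with one very bright "star".
frame = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)
frame[32, 32] = 500.0
img = stretch_and_compress(frame)
```

After this transform the bright star no longer drowns out the faint gradient, which is exactly what makes the stretched image easier to examine.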
2) This next step involves resolving various artifacts that can be seen on some images. A few of these are listed below, along with how the team approaches each:
a) Dark star cores - certain stars can be so bright that the Webb sensors aren't able to capture them, which means we have no data for that star's core. To resolve this for public-facing images, the team samples nearby bright spots and uses those values to fill in the center of the star.
b) Striations - striations are repeated lines that appear in the background of images. While this noise is often filtered out by the camera, it can sometimes pass through, and the team applies "an algorithm" to remove it.
c) Oversaturation - in the case of very bright stars, the sensors Webb uses can sometimes be oversaturated. This causes a problem as Webb takes multiple images to create the scenes we see. The oversaturation carries over to the next images and causes the star to reappear in a shifted location. The team works to remove these repeated stars.
d) Cosmic rays - cosmic rays can appear in an image in two ways: as bright single pixels when a ray hits the sensor head-on, or as short bright lines when it hits at an angle. To remove these, the team examines multiple exposures of the same area to identify the hits and replaces them with averaged background pixels from nearby.
An example of each of these artifacts can be seen in the images below.
Dark Star Core - the center (star core) can be seen as black above.
Striations - thin, faint lines can be seen running vertically in the image.
Oversaturation - the white arrow points to the artifact created by the oversaturation from the bright star.
Cosmic Rays - bright pixels can be seen from cosmic rays (circled in white).
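The cosmic-ray rejection described in step 2d can be sketched by comparing aligned exposures of the same field. The `remove_cosmic_rays` function below is a hypothetical stand-in for the team's actual tools: it rejects outliers against the per-pixel median of the exposure stack, which is one common approach, not necessarily theirs.

```python
import numpy as np

def remove_cosmic_rays(frames, nsigma=5.0):
    """Sketch of cosmic-ray rejection on a hypothetical stack of aligned
    exposures of the same field (shape: n_frames x height x width)."""
    frames = np.asarray(frames, dtype=float)
    # A cosmic ray hits only one exposure, so the per-pixel median across
    # the stack is a ray-free estimate of the true scene.
    median = np.median(frames, axis=0)
    # Median absolute deviation: a spread estimate that a single huge
    # outlier cannot inflate (unlike a plain standard deviation).
    mad = np.median(np.abs(frames - median), axis=0)
    cleaned = frames.copy()
    for i in range(len(frames)):
        # Flag pixels far above the stack median as ray hits...
        hits = frames[i] > median + nsigma * (1.4826 * mad + 1e-6)
        # ...and replace them with the stack median, standing in for the
        # "averaged background pixels from nearby" described above.
        cleaned[i][hits] = median[hits]
    return cleaned

# Toy example: three identical exposures, one cosmic-ray hit.
frames = np.ones((3, 4, 4))
frames[1, 2, 3] += 50.0
cleaned = remove_cosmic_rays(frames)
```

Because the hit appears in only one of the three exposures, the stack median ignores it entirely, and the flagged pixel is restored to the background level.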
3) Next, an initial color palette is applied to the image. Because each filter is specially designed to capture certain wavelengths of light, we know the wavelength associated with each image (although for the Webb these are all infrared wavelengths). To turn this into something we can see, "color is applied chromatically: The shortest wavelengths are assigned blue, slightly longer wavelengths are assigned green, and the longest wavelengths are assigned red. If more than three images make up the final composite image, purple, teal, and orange may be assigned to additional filters that fall before or in between blue, green, and red."
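The chromatic color assignment quoted above can be sketched for the simple three-filter case. The function name, the wavelength values, and the toy images below are assumptions for illustration only.

```python
import numpy as np

def assign_colors_chromatically(filter_images):
    """Sketch of step 3: chromatic color assignment.

    `filter_images` maps each filter's wavelength in microns to its
    grayscale image (hypothetical inputs, already scaled to 0-1).
    Exactly three filters are assumed; the real process may also use
    purple, teal, and orange for additional filters.
    """
    # Sort filters by wavelength: shortest -> blue, middle -> green,
    # longest -> red, mirroring how our eyes order visible light.
    wavelengths = sorted(filter_images)
    blue, green, red = (filter_images[w] for w in wavelengths)
    # Stack the channels into an H x W x 3 RGB image.
    return np.dstack([red, green, blue])

# Toy example with three infrared wavelengths (microns).
h = w = 4
imgs = {
    0.9: np.full((h, w), 0.2),   # shortest wavelength -> blue channel
    2.0: np.full((h, w), 0.5),   # middle wavelength   -> green channel
    4.4: np.full((h, w), 0.8),   # longest wavelength  -> red channel
}
rgb = assign_colors_chromatically(imgs)
```

The key design choice is that color encodes wavelength order rather than true color: the scene has no visible-light colors at all, but ordering the channels chromatically keeps the false-color image physically meaningful.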
4) This next step is more subjective than the objective truth shown by the images. Techniques are used to enhance the image, mainly for the viewer's eye. The initial color palette is balanced to give equal amounts of red, green, and blue across the image. Principles of photography are used to bring out dull areas and help convey the depth of the images. Images are also cropped and oriented; notably, the team almost always orients images so that the diffraction pattern points in the same cardinal directions. The goal of this step is to create images that remain scientifically accurate, but are also interesting and engaging.
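The color balancing in step 4, giving equal amounts of red, green, and blue across the image, can be sketched as a simple gray-world balance. The `balance_rgb` function is an assumed, simplified stand-in for the specialists' interactive tools.

```python
import numpy as np

def balance_rgb(rgb):
    """Sketch of step 4's color balance: scale each channel so red, green,
    and blue contribute equally across the image (a gray-world balance)."""
    rgb = np.asarray(rgb, dtype=float)
    means = rgb.reshape(-1, 3).mean(axis=0)   # per-channel mean brightness
    target = means.mean()                     # common level to balance to
    balanced = rgb * (target / means)         # rescale each channel
    return np.clip(balanced, 0.0, 1.0)        # keep values displayable

# Toy example: an image with too much red.
img = np.dstack([np.full((4, 4), 0.9),   # red
                 np.full((4, 4), 0.3),   # green
                 np.full((4, 4), 0.3)])  # blue
out = balance_rgb(img)
```

After balancing, each channel contributes the same average brightness, so no single filter's colors dominate the composite.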
5) This step is an in-depth scientific review of the images with the teams that conducted the relevant research, to ensure that the images do not contain erroneous information.
All of these steps are very time-consuming, but they lead to some amazing images. Note that the website linked above (and here) also has a section covering how the MIR and NIR images are combined to generate images that use both sensors, but that is not summarized here.