Microwave (MW) sounders, such as the Advanced Technology Microwave Sounder (ATMS), can penetrate thick clouds and thus "see" the inner structures of severe weather systems. Their images are therefore valuable for evaluating a storm's internal processes and intensity. However, compared with visible and infrared sensors, MW sounding instruments have relatively coarse spatial resolution, which results in blurry, low-quality images (see Figure below). In practice, it is therefore desirable to take a low-resolution MW image and produce an estimate of the corresponding high-resolution image. One of the most common techniques for upscaling an image is interpolation; although simple to implement, it leaves much to be desired in terms of visual quality. Recently, a deep learning method for single image super-resolution (SR) [Dong et al. 2014] has been successfully applied in the computer vision field. The idea behind this method is to exploit the internal similarities between low-resolution images and their high-resolution counterparts in training data sets, effectively learning a mapping between them. As a preliminary study, a Super-Resolution Convolutional Neural Network (SRCNN) has been experimentally applied to low-resolution ATMS images to enhance their quality. We convolve high-spatial-resolution Advanced Microwave Scanning Radiometer (AMSR)-2 data (3 x 5 km) with the ATMS antenna pattern to generate low-resolution (2.2° beamwidth) and high-resolution (1.1° beamwidth) ATMS training data sets. These data are then used to train the SRCNN model, which consists of three parts: 1) patch extraction and representation, 2) non-linear mapping, and 3) reconstruction. As demonstrated in Figure 1, we applied this model to SNPP ATMS channel-16 images of Hurricane Erick at 1120 UTC on July 30, 2019. The enhanced image, with a 2x resolution improvement, more clearly discloses the internal structure of the hurricane (click the button below to see the difference). While the preliminary results are encouraging, several questions remain for future work, such as changes in signal-to-noise ratio, model improvements (e.g., 4x resolution enhancement), and comparison with the Backus-Gilbert (B-G) method.
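For readers unfamiliar with SRCNN, its three-stage structure maps directly onto three convolutional layers. Below is a minimal PyTorch sketch, assuming single-channel brightness-temperature patches that have already been bicubically upsampled to the target grid; the layer widths and kernel sizes follow the original SRCNN paper and are illustrative rather than the exact configuration used in this study.

```python
# A minimal SRCNN sketch (after Dong et al. 2014), assuming single-channel
# brightness-temperature (BT) patches; layer sizes are illustrative only.
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # 1) patch extraction and representation
        self.extract = nn.Conv2d(1, 64, kernel_size=9, padding=4)
        # 2) non-linear mapping
        self.map = nn.Conv2d(64, 32, kernel_size=1)
        # 3) reconstruction
        self.reconstruct = nn.Conv2d(32, 1, kernel_size=5, padding=2)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: upsampled low-resolution BT image, shape (N, 1, H, W)
        x = self.relu(self.extract(x))
        x = self.relu(self.map(x))
        return self.reconstruct(x)

# Training minimizes the MSE between the SRCNN output and the high-resolution target:
model = SRCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()
# for low_res, high_res in loader:  # paired ATMS-like patches (hypothetical loader)
#     loss = loss_fn(model(low_res), high_res)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```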
Efforts are also being made to use an image-to-image translation model to generate GOES-like, temporally continuous synthetic microwave images from infrared (IR) images. Convolutional neural network (CNN) based image-to-image translation is a class of vision and graphics problems in which the goal is to learn the mapping between an input image and an output image. For our application, the goal is to convert GOES ABI infrared images from several channels (i.e., water vapor, ozone, atmospheric window, and CO2 sounding channels) into the corresponding MW images (89 or 35 GHz). In theory, radiation in the IR spectral domain carries different information than the MW observations: the IR images reflect radiation emitted from cloud tops, whereas the primary signature in MW 89-GHz images is lowered brightness temperatures (BTs) caused by scattering from ice and cloud and rain droplets within deep convection and precipitating anvil clouds. However, when deep convective cores exist below anvil clouds in tropical cyclones (TCs, i.e., hurricanes or typhoons), the high-spatial-resolution IR imagery regularly shows spatial roughness at the cloud tops. A CNN-based image-to-image translation model can exploit this spatial context and offers a substantial improvement in detecting precipitation features within TCs.
Specifically, we will use the mature Pix2Pix model to achieve this goal. Pix2Pix is a Generative Adversarial Network (GAN) designed for general-purpose image-to-image translation, presented by Isola et al. (2017). The GAN architecture comprises a generator model that outputs new, plausible synthetic images and a discriminator model that classifies images as real (from the dataset) or fake (generated). The two models are updated simultaneously in an adversarial process, in which the generator seeks to better fool the discriminator and the discriminator seeks to better identify the counterfeit images.
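As a rough illustration of how this adversarial update works, the following PyTorch sketch pairs a simplified encoder-decoder generator (standing in for the full U-Net) with a PatchGAN-style discriminator, assuming four IR input channels and one MW output channel; the architectures, channel counts, and L1 weight are illustrative assumptions, not the configuration used in this work.

```python
# A minimal Pix2Pix-style training step (after Isola et al. 2017): IR channels -> synthetic MW BT.
import torch
import torch.nn as nn

def conv_block(c_in, c_out, down=True):
    layer = nn.Conv2d(c_in, c_out, 4, 2, 1) if down else nn.ConvTranspose2d(c_in, c_out, 4, 2, 1)
    return nn.Sequential(layer, nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class Generator(nn.Module):          # simplified encoder-decoder (full model uses a U-Net)
    def __init__(self, in_ch=4, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_ch, 64), conv_block(64, 128), conv_block(128, 256),
            conv_block(256, 128, down=False), conv_block(128, 64, down=False),
            nn.ConvTranspose2d(64, out_ch, 4, 2, 1), nn.Tanh())
    def forward(self, x): return self.net(x)

class Discriminator(nn.Module):      # PatchGAN-style: classifies (IR, MW) pairs patch by patch
    def __init__(self, in_ch=5):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_ch, 64), conv_block(64, 128),
            nn.Conv2d(128, 1, 4, 1, 1))
    def forward(self, ir, mw): return self.net(torch.cat([ir, mw], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(ir, mw_real, lambda_l1=100.0):
    # Discriminator: push real pairs toward 1 and generated pairs toward 0
    fake = G(ir)
    d_real, d_fake = D(ir, mw_real), D(ir, fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool the discriminator while staying close (L1) to the real MW image
    d_fake = D(ir, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, mw_real)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```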
To train the Pix2Pix model, we generated two training datasets. The first consists of satellite images simulated from NCEP Hurricane Weather Research and Forecasting (HWRF) model forecast fields for TC cases: GOES IR images at several IR channels and AMSR-2 images at 91.65 GHz or 35.6 GHz, computed on the model grid with zero scan angle using a radiative transfer model (i.e., the Community Radiative Transfer Model). The second training dataset consists of real observations: simultaneously collocated AMSR-2 and GOES-16 ABI IR and GLM images for previous TC cases over the North Atlantic and Pacific. The figure below shows an example of using the Pix2Pix model to generate a MW image for an independent TC case. The model successfully transforms GOES-R IR radiance images into the corresponding MW images and reveals the overall storm structure, though there is still room for improvement in predicting the fine internal structures within TCs.
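A minimal sketch of how collocated fields might be assembled into paired training samples is given below; it assumes the IR and MW fields have already been regridded to a common grid and saved as NumPy arrays, and the file names, patch size, and normalization bounds are hypothetical.

```python
# A minimal sketch of building paired (IR, MW) samples for Pix2Pix training.
# Assumes pre-regridded arrays on a common grid; paths and bounds are hypothetical.
import numpy as np
import torch

def normalize_bt(bt, bt_min=180.0, bt_max=320.0):
    # Scale brightness temperatures (K) into [-1, 1] to match the Tanh generator output
    return 2.0 * (np.clip(bt, bt_min, bt_max) - bt_min) / (bt_max - bt_min) - 1.0

def load_pair(ir_path, mw_path, patch=256):
    ir = np.load(ir_path)   # (H, W, 4): four ABI IR channels on the common grid
    mw = np.load(mw_path)   # (H, W):    collocated AMSR-2 89-GHz BT
    i = np.random.randint(0, ir.shape[0] - patch)
    j = np.random.randint(0, ir.shape[1] - patch)
    ir_patch = normalize_bt(ir[i:i + patch, j:j + patch]).transpose(2, 0, 1)
    mw_patch = normalize_bt(mw[i:i + patch, j:j + patch])[None, ...]
    return torch.from_numpy(ir_patch).float(), torch.from_numpy(mw_patch).float()
```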
Efforts are also being made to use an AI-based morphing technique to transform static MW images into an animation that can monitor the development of rapidly evolving TCs. The Climate Prediction Center morphing method (CMORPH) is a similar approach: it uses motion vectors derived from half-hourly geostationary satellite IR imagery to propagate the relatively high-quality precipitation estimates derived from passive microwave data (Joyce et al. 2004). Our work, however, focuses on high-spatial- and high-temporal-resolution MW images and is based on advanced AI and ML approaches. Specifically, given the high refresh rate of ABI images (a full-disk scan every 15 minutes and a Continental US scan every 5 minutes), consecutive high-resolution ABI images can be combined to compute optical flow vectors by tracking the movement of clouds or water vapor. Optical flow is defined as the apparent motion of individual pixels on the image plane; it often serves as a good approximation of the true physical motion (e.g., of a cloud or hurricane) projected onto the image plane.
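As a simple illustration, dense optical flow between two consecutive ABI images can be computed with OpenCV's Farneback method as sketched below; the file names, channel choice, and flow parameters are illustrative assumptions, and the operational implementation may use a different flow estimator.

```python
# A minimal dense optical-flow sketch between two consecutive ABI IR images.
import cv2
import numpy as np

def abi_to_gray(bt):
    # Map brightness temperature (K) to an 8-bit grayscale image for flow tracking
    return cv2.normalize(bt, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

bt_t0 = np.load("abi_ch13_t0.npy")   # ABI window-channel BT at time t0 (hypothetical file)
bt_t1 = np.load("abi_ch13_t1.npy")   # same scene one scan interval later

flow = cv2.calcOpticalFlowFarneback(
    abi_to_gray(bt_t0), abi_to_gray(bt_t1), None,
    pyr_scale=0.5, levels=3, winsize=25, iterations=3,
    poly_n=5, poly_sigma=1.2, flags=0)
# flow[..., 0] and flow[..., 1] hold the per-pixel x and y displacements
# (pixels per scan interval), i.e., the apparent cloud motion on the image plane.
```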
Shown in the figure below is an example of optical flow computed from consecutive GOES images for Hurricane Dorian on September 1, 2019, which clearly indicates the cloud movement around the TC. The left panel in Figure 4 is the AMSR-2 image overlaid on the GOES images at the time GCOM-W1 passed over Hurricane Dorian. We are trying to propagate the TC pixels in the MW images using the derived optical flow vectors. The key of this method is to transform the static MW images into an animation, which can help forecasters track the rain bands moving within TCs. More importantly, we can use both forward and backward propagation to form high-spatial- and high-temporal-resolution MW images over a relatively long period. Moreover, while a single polar-orbiting satellite passes over a specific location only twice a day in the mid-latitude and tropical regions, ATMS on NOAA-20 and SNPP combined with AMSR-2 on GCOM-W1 can form a constellation to make frequent observations of TCs. In addition, the shape and intensity of the MW pixels will be morphed during the time between microwave sensor scans using the AI-based morphing technique.
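A minimal sketch of the propagation step is given below: a single MW overpass image is shifted along a fraction of the flow field with cv2.remap to produce intermediate frames. The linear time scaling is an illustrative assumption, and the intensity morphing performed by the AI-based technique is not shown here.

```python
# A minimal sketch of propagating a static MW image along the optical-flow field.
import cv2
import numpy as np

def warp_forward(mw_bt, flow, frac):
    # Shift the MW overpass image by a fraction `frac` of the flow displacement
    # (backward warping: each output pixel samples from its upstream location).
    h, w = mw_bt.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x - frac * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - frac * flow[..., 1]).astype(np.float32)
    return cv2.remap(mw_bt.astype(np.float32), map_x, map_y, cv2.INTER_LINEAR)

# e.g., intermediate frames between the MW overpass and the next ABI scan time:
# frames = [warp_forward(mw_overpass, flow, f) for f in (0.25, 0.5, 0.75)]
```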