Rigid and Non-rigid Motion Artifact Reduction in X-ray CT using Attention Module

People

Youngjun Ko, Seunghyuk Moon, Jongduk Baek, Hyunjung Shim

Abstract

Motion artifacts are a major factor that can degrade the diagnostic performance of computed tomography (CT) images. In particular, motion artifacts become considerably more severe when an imaging system requires a long scan time, as in dental CT or cone-beam CT (CBCT) applications, where patients undergo both rigid and non-rigid motions. To address this problem, we propose a new real-time technique for motion artifact reduction that utilizes a deep residual network with an attention module. Our attention module is designed to increase the model capacity by amplifying or attenuating the residual features according to their importance. We trained and evaluated the network on four benchmark datasets that we created, containing rigid motions or both rigid and non-rigid motions, under step-and-shoot fan-beam CT (FBCT) or CBCT geometries. Each dataset provides pairs of motion-corrupted CT images and their ground-truth counterparts.

The strong modeling power of the proposed network allowed us to successfully handle motion artifacts from the two CT systems under various motion scenarios in real time, and the model demonstrated clear performance benefits. In addition, we compared our model with Wasserstein generative adversarial network (WGAN)-based models and a deep residual network (DRN)-based model, which are among the most powerful techniques for CT denoising and natural RGB image deblurring, respectively. Based on extensive analysis and comparisons on the four benchmark datasets, we confirmed that our model outperforms these competitors. Our benchmark datasets and implementation code are available at https://github.com/youngjun-ko/ct_mar_attention.

Overview and Contributions

Architecture of the proposed network. It takes a single motion-corrupted CT image as input, processes it with N AttBlocks, and outputs a motion-reduced CT image. The structure of AttBlock is illustrated in the enlarged box, where the residual structure is marked in blue and the self-attention module is marked in bold red. Note that AttBlock computes the attention weights from the residual part and multiplies the two together before adding the identity mapping.
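The enlarged-box description corresponds to a channel-attention mechanism inside a residual block. Below is a minimal PyTorch sketch of such an AttBlock; the channel count, reduction ratio, and exact convolution layout are assumptions rather than the authors' configuration (see the released code for the actual implementation).

```python
# Minimal sketch of an AttBlock, assuming a squeeze-and-excitation style
# channel attention; layer sizes are illustrative, not the paper's.
import torch
import torch.nn as nn

class AttBlock(nn.Module):
    def __init__(self, channels: int = 64, reduction: int = 16):
        super().__init__()
        # Residual part: two 3x3 convolutions (blue in the figure).
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        # Attention computed from the residual features via
        # global average pooling (red in the figure).
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # global average pooling
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        res = self.residual(x)
        # Amplify or attenuate residual features by their importance,
        # then add the identity mapping.
        return x + res * self.attention(res)
```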

  1. Our main contribution is the design of an effective attention module, implemented with global average pooling, for reducing CT motion artifacts. To the best of our knowledge, we are the first to introduce an attention module for compensating motion-distorted CT images. We highlight that our attention module, despite its simplicity, improves the model capacity without increasing the network depth and shows significant performance gains across various scenarios, including both rigid and non-rigid motions.

  2. To verify the performance of the proposed network, we generated and publicly released motion-paired benchmark datasets representing simple to complex motion scenarios as well as varied patient anatomy. We expect these datasets to provide a basis for further deep learning research on motion artifact reduction.

Experimental Datasets

In the paper, we generated four benchmark datasets as follows:

  • FBCT teeth dataset with 2-DoF rigid motions

  • CQ500 dataset with 2-DoF rigid motions

  • CBCT teeth dataset with 6-DoF rigid motions

  • Chest dataset with 6-DoF rigid and non-rigid motions

CT geometries and object transformations under various motion scenarios: (a) 2-DoF rigid motion, (b) 6-DoF rigid motion, and (c) 6-DoF non-rigid motion. An equally spaced line detector and a planar detector are used in the FBCT and CBCT geometries, respectively.
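As a hedged illustration of the rigid cases in (a) and (b), the sketch below applies a 6-DoF rigid transformation, three Euler-angle rotations plus three translations, to object coordinates; the "xyz" Euler convention is an assumption and may differ from the paper's.

```python
# Sketch of a 6-DoF rigid object transformation. The "xyz" Euler
# convention is an assumption; the paper's convention may differ.
import numpy as np
from scipy.spatial.transform import Rotation

def rigid_transform(points, rot_deg, trans_mm):
    """Apply a 6-DoF rigid motion to an (N, 3) array of coordinates.

    rot_deg:  (rx, ry, rz) rotations in degrees.
    trans_mm: (tx, ty, tz) translations in mm.
    """
    R = Rotation.from_euler("xyz", rot_deg, degrees=True).as_matrix()
    return points @ R.T + np.asarray(trans_mm)
```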

An example motion trajectory for each axis of a 6-DoF rigid motion: red, green, and blue lines indicate motion trajectories along the x-, y-, and z-axes, respectively, for (a) translation (mm) and (b) rotation (degrees).
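For illustration, the following sketch generates smooth, randomly perturbed per-view trajectories similar in spirit to those plotted above; the random-walk-plus-smoothing model and the amplitude limits are assumptions, not the paper's actual generation procedure.

```python
# Hypothetical generation of smooth 6-DoF motion trajectories over
# projection views; model and amplitudes are assumptions.
import numpy as np

def random_trajectory(n_views, max_amp, seed=None):
    """Smooth trajectory starting at zero with peak |amplitude| = max_amp."""
    rng = np.random.default_rng(seed)
    walk = np.cumsum(rng.standard_normal(n_views))  # random walk
    window = max(n_views // 10, 1)
    smooth = np.convolve(walk, np.ones(window) / window, mode="same")
    smooth -= smooth[0]  # no motion at the first view
    return smooth / np.abs(smooth).max() * max_amp

n_views = 720  # assumed number of projection views
tx, ty, tz = (random_trajectory(n_views, max_amp=2.0, seed=s) for s in range(3))      # mm
rx, ry, rz = (random_trajectory(n_views, max_amp=2.0, seed=s + 3) for s in range(3))  # degrees
```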

Experimental Results

Comparison of different network outputs on (1) the FBCT teeth and (2) the CQ500 datasets.

Comparison of different network outputs on (1) the CBCT teeth and (2) the chest datasets.

* Note that the display window settings are indicated in Hounsfield units (HU) on the right side of each figure.

Quantitative results of motion artifact reduction from various networks.
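The specific measures are reported in the paper; as a representative example, the sketch below computes PSNR and SSIM between a motion-reduced output and its ground truth, standard metrics for this kind of comparison. The HU data range used here is an assumption.

```python
# Minimal evaluation sketch. PSNR and SSIM are assumed here as
# representative metrics; refer to the paper for the exact measures.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(pred_hu, gt_hu, data_range=4095.0):
    """Compare a motion-reduced CT image with its ground truth (in HU).

    data_range: assumed HU span (e.g., -1024 to 3071); adjust as needed.
    """
    psnr = peak_signal_noise_ratio(gt_hu, pred_hu, data_range=data_range)
    ssim = structural_similarity(gt_hu, pred_hu, data_range=data_range)
    return psnr, ssim
```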

Publication

Rigid and Non-rigid Motion Artifact Reduction in X-ray CT using Attention Module

Youngjun Ko, Seunghyuk Moon, Jongduk Baek, and Hyunjung Shim, Medical Image Analysis, 2020

Links

[pdf][code]