Machine Learning for
CT Reconstruction/
MAR/Segmentation

  • Deep Learning enabled Model Based Iterative CT Reconstruction (ICASSP 2018, GlobalSIP 2018, Patent Application US15/730543)

Fig. 1. Examples of CT reconstruction. Left: FBP, Middle: Deep Learning MBIR, Right: Standard MBIR. The cardiac ROI is zoomed in the red rectangle. Deep learning MBIR significantly reduces noise and enhances resolution compared with FBP, while producing an image very close to the fully converged ground-truth MBIR image at a much shorter reconstruction time.

Model-Based Iterative Reconstruction (MBIR) has gained increasing attention for CT image reconstruction due to its suppressed noise and superior resolution compared with the filtered back-projection (FBP) algorithm. However, the high computational time needed for MBIR is a major obstacle to its adoption in clinical practice. Among the many models involved, such as the forward projection and electronic noise models, the image prior model plays a significant role in the quality of MBIR. Typical priors such as Markov Random Fields (MRF) are not sufficient to differentiate noise-induced fluctuations from real structures in the image, requiring many iterations for convergence. To speed up MBIR, we learn the image prior model from a large dataset through deep learning. Specifically, we train deep neural networks via residual learning to remove texture noise in an unseen test image, and incorporate them into MBIR as a prior using the alternating direction method of multipliers (ADMM). Experimental results on real cardiac CT scans show that our deep learning MBIR significantly improves the speed of MBIR (~3 s/scan), matching the image quality of fully converged standard MBIR (>5 min/scan) given a noisy FBP initial condition, as displayed in Fig. 1.
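The ADMM structure described above can be sketched as a plug-and-play iteration, where the learned denoiser stands in for the proximal operator of the image prior. The sketch below is a heavily simplified toy (a dense matrix forward model and a generic `denoise` callable in place of the trained network), not the paper's actual implementation:

```python
import numpy as np

def pnp_admm(y, A, denoise, rho=1.0, n_iter=20):
    """Plug-and-play ADMM sketch: alternate a data-fidelity least-squares
    update with a denoising step that acts as the image prior.
    A: forward model (toy dense matrix), y: measured data,
    denoise: learned (or hand-crafted) denoiser used as the prior's prox."""
    x = A.T @ y                      # crude initialization (FBP stand-in)
    v = x.copy()
    u = np.zeros_like(x)
    AtA = A.T @ A
    Aty = A.T @ y
    n = x.size
    for _ in range(n_iter):
        # x-update: quadratic data-fidelity subproblem
        x = np.linalg.solve(AtA + rho * np.eye(n), Aty + rho * (v - u))
        # v-update: the denoiser replaces the prior's proximal operator
        v = denoise(x + u)
        # dual variable update
        u = u + x - v
    return x
```

In the paper's setting the `denoise` step would be the residual-learning network trained to remove texture noise; here any smoothing function (e.g., a moving average) illustrates the algorithmic skeleton.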

  • CT Metal Artifact Reduction/Segmentation using Dictionary Learning (ICIP 2015, IEEE TMI 2019, ICIP Best Paper Runner-Up Award)

Fig. 2. MAR/Segmentation results. Left: Raw CT, Middle: Standard MAR, Right: Dictionary Learning MAR. The bottom row shows the corresponding Potts model segmentation results. Standard MAR with an MRF prior improves the segmentation but introduces secondary artifacts around tissue boundaries. Our dictionary learning MAR achieves significant improvements in both the restored image and the segmentation by preserving tissue boundaries after MAR.

CT images often contain artifacts caused by highly attenuating objects (e.g., metal), such as streaks from X-ray beam hardening and scatter. We propose a novel framework to reduce metal artifacts in CT images using dictionary learning. Dictionary learning finds a sparse representation of a training dataset over an over-complete set of basis functions (atoms). The learned dictionary can then be used to restore noisy images as linear combinations of a small set of dictionary atoms. We develop a joint optimization framework over the restored image and the segmentation label to iteratively apply metal artifact reduction (MAR) and segmentation. Results on an XCAT phantom in Fig. 2 show that our dictionary-learning-based MAR produces a significant reduction in metal artifacts compared with MAR using a Markov Random Field prior, thereby improving segmentation accuracy.
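The sparse-coding step at the core of dictionary-based restoration can be illustrated with a small Orthogonal Matching Pursuit routine: given a (learned) dictionary, greedily select the few atoms that best explain a signal. This is a generic sketch of sparse coding, not the paper's specific joint MAR/segmentation solver:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily sparse-code y over the
    dictionary D (columns are unit-norm atoms), using at most k atoms."""
    residual = y.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit coefficients by least squares on the selected support
        c, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        coef[:] = 0.0
        coef[support] = c
        residual = y - D @ coef
    return coef
```

In a patch-based MAR pipeline, each artifact-corrupted image patch would be sparse-coded this way and replaced by its dictionary reconstruction, which suppresses streaks that the learned atoms cannot represent.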

  • Deep Learning Segmentation for Pediatric CT Organ Dose Estimation (ISBI 2020, SPIE MI 2021, NIH U01EB023822, Code)

Fig. 3. Examples of CT organ segmentation for 5-year-old patients. Top: Uterus, Bottom: Prostate. Despite fewer training samples being available for younger patients, our proposed CFG-SegNet generates better segmentation masks than existing U-Net-based methods for all patients.

CT radiation dose is a growing public health concern. Stochastic cancer risks due to CT radiation, especially in pediatric patients, have prompted state regulations for mandatory dose reporting. However, the current metric, the CT dose index, is widely considered inadequate since it represents the average dose to a uniform cylindrical phantom rather than the dose to the patient's organs. Routine, rapid, and patient-specific CT organ dose estimation requires automatic segmentation. Deep learning shows promising results in CT organ segmentation, but one remaining challenge is the lack of training data, particularly for pediatric CT scans. To tackle this challenge, we propose a novel deep learning segmentation network with a built-in auxiliary classifier generative adversarial network that age-conditionally generates discriminative features during training. The proposed CFG-SegNet (conditional feature generation segmentation network) improves reproductive organ (e.g., prostate/uterus) segmentation in pediatric CT with only a few available training images, compared with state-of-the-art U-Net segmentation (see Fig. 3).
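The conditioning mechanism, generating features conditioned on a class label such as patient age group, can be sketched in miniature: concatenate a noise vector with a one-hot age code and map it through a generator layer. This is a hypothetical, untrained single-layer stand-in meant only to show the age-conditional input structure, not the CFG-SegNet architecture:

```python
import numpy as np

class CondFeatureGenerator:
    """Toy age-conditional feature generator: one linear layer mapping
    [noise ; one-hot age class] -> non-negative feature vector.
    Weights are random (untrained); a real model would learn them
    adversarially against a discriminator/auxiliary classifier."""

    def __init__(self, noise_dim, n_classes, feat_dim, seed=0):
        self.rng = np.random.default_rng(seed)
        self.noise_dim = noise_dim
        self.n_classes = n_classes
        self.W = self.rng.standard_normal((feat_dim, noise_dim + n_classes)) * 0.1
        self.b = np.zeros(feat_dim)

    def generate(self, age_class):
        z = self.rng.standard_normal(self.noise_dim)
        cond = np.zeros(self.n_classes)
        cond[age_class] = 1.0                     # one-hot age condition
        x = np.concatenate([z, cond])
        return np.maximum(self.W @ x + self.b, 0.0)  # ReLU features
```

During training, such generated features for under-represented age groups can augment the discriminative features seen by the segmentation branch, which is the intuition behind conditioning on age when pediatric examples are scarce.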