Highlighted Research
For an updated list of publications, please visit my Google Scholar page.
Simulating Clinical Features on Chest Radiographs using a Style-based Generative Adversarial Autoencoder for Exploratory Image Analysis and CNN Explainability: A Feasibility Study
(in review)
As a follow-up to our FIGAN paper, we assess the feasibility of a style-based generative adversarial autoencoder to simulate clinical and convolutional neural network (CNN) features on chest radiographs for exploratory image analysis and CNN explainability. Specifically, we propose Semantic Exploration and Explainability using a Style-based Generative Adversarial Autoencoder Network (SEE-GAAN), an explainability framework that uses latent space manipulation to generate synthetic image sequences that semantically visualize how clinical and CNN features manifest within medical images. We show that SEE-GAAN sequences can capture changes in anatomical and pathological morphology and density associated with clinical features and CNN predictions. Visual analysis of these sequences can facilitate exploratory medical image analysis and improve CNN explainability over commonly used attribution methods.
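The core mechanism is latent space traversal: encode an image into the style latent space, shift the code along a direction associated with a clinical or CNN feature, and decode each shifted code into a frame of the sequence. Below is a minimal sketch of that idea; `encoder`, `generator`, and `feature_direction` are hypothetical stand-ins, and the actual SEE-GAAN architecture and direction estimation differ in detail.

```python
# Minimal sketch of latent-space traversal for sequence generation.
# `encoder`, `generator`, and `feature_direction` are hypothetical stand-ins.
import torch

@torch.no_grad()
def traverse_latent(encoder, generator, image, feature_direction, steps=7, scale=3.0):
    """Generate a synthetic image sequence by moving a latent code along a
    direction associated with a clinical or CNN-derived feature."""
    w = encoder(image.unsqueeze(0))                     # (1, latent_dim) style code
    direction = feature_direction / feature_direction.norm()
    alphas = torch.linspace(-scale, scale, steps)       # traversal magnitudes
    frames = [generator(w + a * direction) for a in alphas]
    return torch.cat(frames, dim=0)                     # (steps, C, H, W) image sequence
```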
Feasibility of Deep Learning COPD Diagnosis and Staging with Combined CT and Clinical Data
(in revision)
In this study, we assess the potential of a convolutional neural network (CNN) to stage COPD severity on single-phase CT, relative to inspiratory-expiratory CT. We retrospectively obtained 8,893 inspiratory and expiratory lung CT series and spirometry measurements from the COPDGene Phase I cohort. CNNs were trained to predict spirometry measurements (FEV1, FEV1 percent predicted, FEV1/FVC) using clinical data and either single-phase or multi-phase CT as input. Spirometry predictions were then used to predict Global Initiative for Chronic Obstructive Lung Disease (GOLD) stage. GOLD stage accuracies for exact stage, within-one stage, and diagnosis ranged from 65.2% to 85.8% for single-phase CT and from 67.6% to 88.0% for multi-phase CT. These results suggest that CNN-based COPD diagnosis and severity staging is feasible using routine non-contrast inspiratory CT, with diagnostic and staging accuracy comparable to that of inspiratory-expiratory CT.
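For reference, the mapping from (predicted) spirometry to GOLD stage follows the standard cutoffs: FEV1/FVC < 0.70 defines airflow obstruction, and FEV1 percent predicted determines stages 1 through 4. The helper below reflects those public GOLD definitions; the study's exact post-processing may differ.

```python
def gold_stage(fev1_fvc: float, fev1_pct_pred: float) -> int:
    """Map spirometry values (measured or CNN-predicted) to a GOLD stage.

    Returns 0 when FEV1/FVC >= 0.70 (no airflow obstruction), otherwise
    stages 1-4 based on FEV1 percent predicted, per standard GOLD cutoffs.
    """
    if fev1_fvc >= 0.70:
        return 0
    if fev1_pct_pred >= 80:
        return 1
    if fev1_pct_pred >= 50:
        return 2
    if fev1_pct_pred >= 30:
        return 3
    return 4

# Example: predicted FEV1/FVC = 0.62 and FEV1 = 55% predicted -> GOLD 2
print(gold_stage(0.62, 55))
```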
In this study, we propose a semi-automated pipeline and user interface (LiVaS) for rapid segmentation and labeling of the liver vasculature on MRI and evaluate its time efficiency and accuracy against a manual reference standard. We show that the semi-automated pipeline was robust across MRI vendors, producing vasculature segmentations and labels in agreement with expert manual annotations at significantly higher time efficiency. LiVaS could facilitate the creation of large, annotated datasets for training and validating neural networks for automated MRI liver vasculature segmentation.
Convolutional neural networks (CNNs) are increasingly being explored and used for a variety of classification tasks in medical imaging, but current methods for post hoc explainability are limited. Most commonly used methods highlight portions of the input image that contribute to classification. While this provides a form of spatial localization relevant for focal disease processes, it may not be sufficient for co-localized or diffuse disease processes such as pulmonary edema or fibrosis. For the latter, new methods are required to isolate diffuse texture features employed by the CNN where localization alone is ambiguous. We therefore propose a novel strategy for eliciting explainability, called Feature Interpretation using Generative Adversarial Networks (FIGAN), which provides visualization of features used by a CNN for classification or regression. FIGAN uses a conditional generative adversarial network to synthesize images that span the range of a CNN’s principal embedded features. We apply FIGAN to two previously developed CNNs and show that the resulting feature interpretations can clarify ambiguities within attention areas highlighted by existing explainability methods. In addition, we perform a series of experiments to study the effect of auxiliary segmentations, training sample size, and image resolution on FIGAN’s ability to provide consistent and interpretable synthetic images.
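A rough sketch of the central FIGAN idea under simplifying assumptions: compute principal directions of a CNN's embedding space and generate embedding codes that sweep along one of them, which a conditional generator (here a hypothetical `cond_generator`) would then render as synthetic images.

```python
# Sketch of the FIGAN-style idea: find principal directions in a CNN's
# embedding space, then sweep one direction for a conditional generator
# to visualize. `cond_generator` is a hypothetical callable.
import numpy as np

def principal_feature_sweep(embeddings: np.ndarray, component: int = 0, steps: int = 5):
    """PCA over CNN embeddings; return codes spanning one principal embedded feature."""
    mean = embeddings.mean(axis=0)
    centered = embeddings - mean
    # SVD gives the principal axes without needing scikit-learn
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[component]                                  # principal embedded feature
    std = s[component] / np.sqrt(len(embeddings) - 1)     # spread along that axis
    alphas = np.linspace(-2 * std, 2 * std, steps)
    return np.stack([mean + a * axis for a in alphas])    # (steps, embed_dim)

# codes = principal_feature_sweep(all_embeddings)
# frames = [cond_generator(code) for code in codes]       # synthetic image sequence
```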
Diseases affecting the small airways can manifest as pulmonary air trapping, which can go undetected on routine inspiratory chest CT. In many cases, quantitative measurements on a dedicated inspiratory/expiratory lung CT protocol are necessary for air trapping assessment. Recent methods quantify air trapping by registering inspiratory and expiratory phase images using lung deformable registration, but these algorithms often require minutes to hours to perform. We propose a CNN-based algorithm to perform deformable lung registration, reducing inference runtime from as much as ~15 minutes to ~2.25 seconds on CPU and ~1 second on GPU, without loss of accuracy.
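The warping step that such a registration network drives can be sketched as below, assuming a hypothetical `reg_net` that predicts a dense voxel-displacement field; the grid-sampling warp itself is standard PyTorch, not the paper's exact implementation.

```python
# Warp an expiratory CT volume with a CNN-predicted displacement field.
# `reg_net` is a hypothetical network; the warp is a standard grid-sample.
import torch
import torch.nn.functional as F

def warp_with_flow(moving: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp `moving` (N, 1, D, H, W) with displacement `flow` (N, D, H, W, 3),
    where flow's last dimension is ordered (x, y, z) in voxel units."""
    _, _, d, h, w = moving.shape
    zz, yy, xx = torch.meshgrid(
        torch.arange(d), torch.arange(h), torch.arange(w), indexing="ij")
    identity = torch.stack((xx, yy, zz), dim=-1).float()   # (D, H, W, 3) in (x, y, z)
    coords = identity.unsqueeze(0) + flow                  # displaced voxel coordinates
    size = torch.tensor([w, h, d], dtype=torch.float32)
    grid = 2.0 * coords / (size - 1) - 1.0                 # normalize to [-1, 1]
    return F.grid_sample(moving, grid, align_corners=True)

# flow = reg_net(inspiratory_ct, expiratory_ct)   # hypothetical CNN predicting the field
# warped_expiratory = warp_with_flow(expiratory_ct, flow)
```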
We propose a staging system for chronic obstructive pulmonary disease (COPD) severity. We developed an algorithm that automatically quantifies emphysema and air trapping on CT images using deep learning-based lung segmentation and deformable registration. We then defined a CT-based staging system for COPD severity and showed that the proposed staging is prognostic of disease progression. Finally, we created disease maps that visualize the spatial distribution of disease. A podcast interview for the paper is coming soon...
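As a point of reference, the widely used density-threshold measures of emphysema and air trapping can be computed as in the sketch below; the paper's actual quantification and staging rules may differ from these illustrative thresholds.

```python
# Standard density-threshold metrics (illustrative, not the paper's exact rules):
# emphysema as % lung voxels < -950 HU on inspiratory CT, air trapping as
# % lung voxels < -856 HU on the registered expiratory CT.
import numpy as np

def lung_density_metrics(insp_hu, exp_hu_registered, lung_mask):
    """Return (% emphysema, % air trapping) inside `lung_mask` (boolean array)."""
    lung = lung_mask.astype(bool)
    n_voxels = lung.sum()
    emphysema_pct = 100.0 * np.sum(insp_hu[lung] < -950) / n_voxels
    air_trapping_pct = 100.0 * np.sum(exp_hu_registered[lung] < -856) / n_voxels
    return emphysema_pct, air_trapping_pct
```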
CNN color-coded difference maps accurately display longitudinal changes in liver MRI-PDFF
Assessment of longitudinal, spatial changes in liver fat requires manually placed regions of interest on MRI proton density fat fraction (PDFF) images, which is time consuming and laborious. We applied our CNN-based liver registration to create PDFF difference maps that facilitate fast, visual assessment of changes in liver fat. In a reader study, visual assessment using the difference maps strongly agreed with manual estimates performed by expert readers.
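An illustrative sketch of such a difference map, assuming the baseline PDFF map has already been registered to the follow-up (the CNN registration step described above); the color scale and masking here are illustrative choices, not the paper's display settings.

```python
# Color-coded PDFF difference map (illustrative), assuming the baseline map
# is already registered to the follow-up acquisition.
import numpy as np
import matplotlib.pyplot as plt

def pdff_difference_map(baseline_pdff, followup_pdff, liver_mask, limit=10.0):
    """Display follow-up minus baseline PDFF (percentage points) inside the liver."""
    diff = np.where(liver_mask, followup_pdff - baseline_pdff, np.nan)
    plt.imshow(diff, cmap="RdBu_r", vmin=-limit, vmax=limit)
    plt.colorbar(label="PDFF change (percentage points)")
    plt.axis("off")
    plt.show()
```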
Are neural networks always better than traditional machine learning algorithms? We explore this question through the task of assessing contrast uptake adequacy on hepatobiliary phase liver MR images.
Hepatobiliary phase (HBP) imaging using intracellular contrast facilitates the detection of liver lesions by radiologists, since lesions, along with the liver vessels, appear dark relative to the background liver parenchyma. However, the acquisition time can range anywhere from 10 to 60 minutes after contrast injection, depending on liver function, the number of functioning liver cells (hepatocytes), and other factors. If the image is acquired too early, potentially malignant lesions may not be visible; if it is acquired too late, the patient remains in the scanner longer than necessary, which introduces patient discomfort and additional cost to the institution. We developed a deep learning system that automatically determines the adequacy of a liver MR image for lesion detection and could, in principle, be integrated into the scanner software to select the first adequate HBP acquisition.
Neural network-based affine liver registration. In clinical practice, radiologists typically manually register a pair of images (e.g., baseline and follow-up) to determine changes in the liver and focal liver lesions. However, due to patient positioning in the scanner, body habitus, and physiological motion, images can appear quite different, even when manually registered. The proposed algorithm is a fast and automated alternative, allowing for better colocalization of the liver and its anatomical structures.
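A minimal sketch of the resampling step, assuming a hypothetical `affine_net` that predicts a 3 x 4 affine transform; applying the transform is standard PyTorch rather than the paper's exact implementation.

```python
# Apply a CNN-predicted affine transform to a moving liver volume.
# `affine_net` is a hypothetical network; the resampling is standard PyTorch.
import torch
import torch.nn.functional as F

def apply_affine(moving: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    """Resample `moving` (N, 1, D, H, W) with affine `theta` (N, 3, 4) expressed
    in the normalized [-1, 1] coordinates expected by affine_grid."""
    grid = F.affine_grid(theta, size=moving.shape, align_corners=True)
    return F.grid_sample(moving, grid, align_corners=True)

# theta = affine_net(fixed_volume, moving_volume)   # hypothetical predicted transform
# registered = apply_affine(moving_volume, theta)
```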
Estimating Mountain Glacier Flowlines by Local Linear Regression Gradient Descent
Determining the flowline of a glacier by applying gradient descent to its corresponding elevation map. Combined with other methods for determining glacier termini, this approach facilitates the large-scale monitoring of mountain glaciers using satellite imaging.
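A hedged sketch of the approach: estimate the local slope of the elevation map by least-squares plane fitting in a window around the current point, then step downhill and repeat. The window size, step length, and stopping rule below are illustrative choices, not the paper's exact parameters.

```python
# Trace an approximate glacier flowline by local-plane gradient descent
# on an elevation map; parameters here are illustrative.
import numpy as np

def flowline(elevation: np.ndarray, start, step=2.0, radius=5, n_steps=500):
    """Trace a flowline from `start` (row, col) by descending the locally fitted slope."""
    path = [np.asarray(start, dtype=float)]
    for _ in range(n_steps):
        r, c = np.round(path[-1]).astype(int)
        if not (0 <= r < elevation.shape[0] and 0 <= c < elevation.shape[1]):
            break                                   # stepped off the elevation map
        r0, r1 = max(r - radius, 0), min(r + radius + 1, elevation.shape[0])
        c0, c1 = max(c - radius, 0), min(c + radius + 1, elevation.shape[1])
        rows, cols = np.mgrid[r0:r1, c0:c1]
        # Fit z ~ a*row + b*col + d over the window; (a, b) approximates the gradient
        A = np.column_stack([rows.ravel(), cols.ravel(), np.ones(rows.size)])
        (a, b, _), *_ = np.linalg.lstsq(A, elevation[r0:r1, c0:c1].ravel(), rcond=None)
        grad = np.array([a, b])
        if np.linalg.norm(grad) < 1e-6:
            break                                   # flat region: terminate
        path.append(path[-1] - step * grad / np.linalg.norm(grad))
    return np.array(path)
```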