Projects

Medical image registration:

Medical imaging studies the human anatomy through modalities such as computed tomography (CT) and magnetic resonance imaging (MRI). Deformable image registration is a basic task in medical imaging, fundamental and critical to both diagnostics and therapeutics in precision and personalized medicine. Existing software packages for deformable registration either provide only a forward deformation vector field (DVF), or render both forward and backward DVFs with time-intensive computation. Having both DVFs offers multiple advantages in medical image analysis and processing; however, the latency, which is substantially longer than clinical time windows, hinders the transition of these benefits to clinical applications. Our study aims at facilitating that transition through algorithmic innovation.

In my Ph.D. work, we introduce a novel registration approach that substantially shifts the conventionally perceived tradeoff boundary between efficiency on one side and functionality and accuracy on the other. In the new approach, we use efficient existing methods for forward DVF estimation and complete symmetric registration with a backward DVF estimation, at a computation speed comparable to the forward DVF generation and at high accuracy in both inverse consistency and registration. The forward DVF may also be refined in this symmetric augmentation or completion process. The efficacy of our approach is supported by theoretical analysis and empirical results.

The key conceptual and algorithmic innovation is the adaptive use of forward and backward inverse-consistency (IC) residuals as feedback for refining the DVF estimates. The forward IC residual was used heuristically in earlier work; we give a theoretical explanation and conditions for when such non-adaptive feedback succeeds or fails. We further provide a framework of algorithm design for DVF inversion with a simple adaptive feedback control mechanism. The use of backward IC residuals is original: by convergence-rate analysis, the iteration with backward IC residuals as updates may be seen as an implicit Newton iteration, with great advantages in simplicity, efficiency, and robustness over the explicit Newton iteration for DVF inversion. The algorithm framework is completed with convergence analysis, a controllability condition, pre-evaluation of the initial forward DVF data, and post-evaluation of the DVF estimates.

Experimental results with our approach on synthetic data and real thoracic CT images show significant improvements in both registration and inverse-consistency errors, and are in remarkable agreement with the analysis-based predictions.
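
To make the feedback idea concrete, here is a minimal 2D sketch of DVF inversion driven by the inverse-consistency residual. The fixed feedback gain `mu`, the stopping tolerance, and the synthetic forward DVF below are illustrative assumptions, not the adaptive control mechanism of the thesis itself.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_dvf(dvf, points):
    """Sample a DVF of shape (2, H, W) at floating-point locations `points` (2, H, W)."""
    return np.stack([
        map_coordinates(dvf[c], points, order=1, mode='nearest')
        for c in range(2)
    ])

def invert_dvf(u, n_iter=50, mu=0.5, tol=1e-3):
    """Estimate the backward DVF v such that v(x) + u(x + v(x)) is approximately 0.

    u  : forward DVF, shape (2, H, W), in pixel units.
    mu : feedback gain on the IC residual (fixed here; adapting this gain is
         the key point of the actual method).
    """
    _, H, W = u.shape
    grid = np.stack(np.meshgrid(np.arange(H), np.arange(W), indexing='ij')).astype(float)
    v = -u.copy()                      # initial guess: negated forward DVF
    for _ in range(n_iter):
        r = v + warp_dvf(u, grid + v)  # backward IC residual: r(x) = v(x) + u(x + v(x))
        v = v - mu * r                 # feedback update on the backward DVF
        if np.abs(r).max() < tol:      # stop once inverse consistency is tight
            break
    return v

# Tiny usage example with a smooth synthetic forward DVF.
H, W = 64, 64
yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing='ij')
u = np.stack([2.0 * np.sin(np.pi * xx / W), 1.5 * np.sin(np.pi * yy / H)])
v = invert_dvf(u)
grid = np.stack([yy, xx]).astype(float)
print("max IC residual after inversion:", np.abs(v + warp_dvf(u, grid + v)).max())
```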

Resources: paper thesis code slides


Abnormality detection in chest radiographs:

Deep learning (DL) models are being deployed at medical centers to aid radiologists in diagnosing lung conditions from chest radiographs. Such models are often trained on a large volume of publicly available labeled radiographs. However, the ability of these pre-trained DL models to generalize in clinical settings is poor because of differences in data distribution between publicly available and privately held radiographs. In chest radiographs, this heterogeneity in distributions arises from the diverse X-ray equipment and acquisition configurations used to generate the images. In the machine learning community, the challenge posed by heterogeneity in the data-generating source is known as domain shift, which is a mode shift in the generative model. In this work, we introduced a domain-shift detection and removal method to overcome this problem. We used a pre-trained DenseNet121 (Tang et al., 2020) for abnormality detection, trained on ChestX-ray14 (Wang et al., 2017), a publicly available dataset released by the National Institutes of Health. We evaluated the abnormality detection results on the MIMIC-CXR dataset, another large publicly available chest radiograph source, and showed a significant classification improvement with domain-shift removal.
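
As an illustration of the detection step, the sketch below scores the shift between a source batch (e.g., ChestX-ray14) and a target batch (e.g., MIMIC-CXR) using pooled features from a pre-trained DenseNet121 backbone. The maximum mean discrepancy (MMD) statistic, kernel bandwidth, and stand-in batches are assumptions for illustration; the detection and removal procedure in the paper may differ.

```python
import torch
import torch.nn.functional as F
from torchvision.models import densenet121, DenseNet121_Weights

backbone = densenet121(weights=DenseNet121_Weights.DEFAULT).eval()

@torch.no_grad()
def pooled_features(images):
    """Global-average-pooled DenseNet121 features for a batch of (N, 3, H, W) images."""
    fmap = backbone.features(images)                  # (N, 1024, h, w)
    return F.adaptive_avg_pool2d(fmap, 1).flatten(1)  # (N, 1024)

def mmd_rbf(x, y, sigma=10.0):
    """Squared maximum mean discrepancy with an RBF kernel, used as a shift score."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Usage: a large score flags a domain shift to be removed (e.g., by intensity
# normalization or feature alignment) before running the abnormality classifier.
source_batch = torch.rand(8, 3, 224, 224)   # stand-in for ChestX-ray14 images
target_batch = torch.rand(8, 3, 224, 224)   # stand-in for MIMIC-CXR images
score = mmd_rbf(pooled_features(source_batch), pooled_features(target_batch))
print(f"domain-shift score (MMD^2): {score.item():.4f}")
```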

Resources: paper slides


The effect of image resolution on automated classification of chest X-rays:

Deep learning models have received much attention lately for their ability to achieve expert-level performance on automated analysis of chest X-rays. Although publicly available chest X-ray datasets include high-resolution images, most models are trained on reduced-size images due to limitations on GPU memory and training time. As compute capability continues to advance, it will become feasible to train large convolutional neural networks on high-resolution images. To verify whether this will lead to increased performance, we perform a systematic evaluation to measure the effect of input chest X-ray image resolution on accuracy. The study is based on the publicly available MIMIC-CXR-JPG dataset, comprising 377,110 high-resolution chest X-ray images, each provided with 14 labels derived from the corresponding free-text radiology reports. Our original hypothesis that increased resolution would lead to higher accuracy held true for some but not all of the tasks. Interestingly, we find that tasks requiring a large receptive field are better suited to downscaled input images, and we verify this qualitatively by inspecting effective receptive fields and class activation maps of trained models. Finally, we show that a stacking ensemble across resolutions outperforms each individual learner at every input resolution while providing interpretable scale weights, suggesting that multi-scale features are crucially important to information extraction from high-resolution chest X-rays.
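
The sketch below shows one way such a cross-resolution stacking ensemble can be set up: each member sees its own downscaled copy of the input, and a softmax over learned per-scale weights yields the interpretable scale importances. The member models, resolutions, and weighting scheme here are illustrative assumptions, not the exact configuration reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResolutionEnsemble(nn.Module):
    def __init__(self, members, resolutions):
        """members: per-resolution classifiers (already trained, outputs are logits);
        resolutions: one square input size per member, e.g. [256, 512, 1024]."""
        super().__init__()
        assert len(members) == len(resolutions)
        self.members = nn.ModuleList(members)
        self.resolutions = resolutions
        # One learnable weight per resolution; the softmax keeps them on a simplex,
        # so they can be read off as the relative importance of each scale.
        self.scale_logits = nn.Parameter(torch.zeros(len(members)))

    def forward(self, x):
        # x: full-resolution batch (N, C, H, W); each member sees its own downscaled copy.
        probs = []
        for model, res in zip(self.members, self.resolutions):
            xi = F.interpolate(x, size=(res, res), mode='bilinear', align_corners=False)
            probs.append(torch.sigmoid(model(xi)))    # multi-label probabilities
        w = torch.softmax(self.scale_logits, dim=0)
        return sum(wi * p for wi, p in zip(w, probs))

# Usage with placeholder members (a real setup would use CNNs trained per resolution).
members = [nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(1, 14))
           for _ in range(3)]
ensemble = ResolutionEnsemble(members, resolutions=[256, 512, 1024])
out = ensemble(torch.rand(2, 1, 1024, 1024))
print(out.shape, torch.softmax(ensemble.scale_logits, 0))  # (2, 14) and the scale weights
```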

Resources: paper