* indicates co-first authors
See full list of publications: Publications, Google Scholar
Abstract: Background: Computed tomography attenuation correction (CTAC) scans are routinely obtained during cardiac perfusion imaging, but are currently only utilized for attenuation correction and visual calcium estimation. We aimed to develop a novel artificial intelligence (AI)-based approach to obtain volumetric measurements of chest body composition from CTAC scans and to evaluate these measures for all-cause mortality (ACM) risk stratification. Methods: We applied AI-based segmentation and image-processing techniques to CTAC scans from a large international image-based registry (four sites) to define the chest rib cage and multiple tissues. Volumetric measures of bone, skeletal muscle (SM), and subcutaneous, intramuscular (IMAT), visceral (VAT), and epicardial (EAT) adipose tissues were quantified between the automatically identified T5 and T11 vertebrae. The independent prognostic value of tissue attenuation and indexed volumes was evaluated for predicting ACM, adjusting for established risk factors and 18 other body composition measures, using Cox regression models and Kaplan-Meier curves. Findings: End-to-end processing time was <2 minutes per scan with no user interaction. Of the 9918 patients studied, 5451 (55%) were male. During a median follow-up of 2.5 years, 610 (6.2%) patients died. High VAT, EAT, and IMAT attenuation were associated with increased ACM risk (adjusted hazard ratio (HR) [95% confidence interval] for VAT: 2.39 [1.92, 2.96], p<0.0001; EAT: 1.55 [1.26, 1.90], p<0.0001; IMAT: 1.30 [1.06, 1.60], p=0.0124). Patients with high bone attenuation were at lower risk of death compared with patients with lower bone attenuation (adjusted HR 0.77 [0.62, 0.95], p=0.0159). Likewise, high SM volume index was associated with a lower risk of death (adjusted HR 0.56 [0.44, 0.71], p<0.0001). Interpretation: CTAC scans obtained routinely during cardiac perfusion imaging contain important volumetric body composition biomarkers which can be measured automatically and offer important additional prognostic value.
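For readers unfamiliar with the type of survival analysis used above, the sketch below shows how an adjusted Cox proportional hazards model for all-cause mortality could be fit with the lifelines library. It is an illustration only, not the study's actual pipeline; the file name and column names (e.g. vat_attenuation, sm_volume_index, died) are my own assumptions.

```python
# Hedged sketch: adjusted Cox model for all-cause mortality.
# Column names and covariates are illustrative, not taken from the paper.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("ctac_body_composition.csv")  # hypothetical per-patient export

covariates = [
    "vat_attenuation",   # visceral adipose tissue attenuation (HU)
    "sm_volume_index",   # skeletal muscle volume indexed to body size
    "age", "male", "diabetes", "hypertension",  # illustrative clinical risk factors
]

cph = CoxPHFitter()
cph.fit(
    df[covariates + ["follow_up_years", "died"]],
    duration_col="follow_up_years",  # time to death or censoring
    event_col="died",                # 1 = all-cause death, 0 = censored
)
cph.print_summary()  # adjusted hazard ratios with 95% confidence intervals
```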
Abstract: This paper considers the problem of recovering signals modeled by generative models from linear measurements contaminated with sparse outliers. We propose an outlier detection approach for reconstructing the ground-truth signals by solving an ℓ1 norm minimization problem. We establish theoretical recovery guarantees for the reconstruction of signals using generative models in the presence of outliers, giving lower bounds on the number of correctable outliers. Our results are applicable to both linear and nonlinear generator neural networks with an arbitrary number of layers. We propose an iterative and linearized alternating direction method of multipliers (ADMM) algorithm for solving the outlier detection problem via ℓ1 norm minimization, and a gradient descent algorithm for solving the outlier detection problem via squared ℓ1 norm minimization. We conduct extensive experiments using variational auto-encoders and deep convolutional generative adversarial networks, and the experimental results show that the signals can be successfully reconstructed under outliers using our approach. Our approach outperforms the traditional Lasso and ℓ2 norm minimization approaches.
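As a rough illustration of the squared ℓ1 minimization idea mentioned above (a sketch under my own assumptions, not the paper's experimental setup), the following PyTorch snippet recovers a latent code by gradient descent on a squared ℓ1 residual, with a placeholder generator standing in for a trained VAE decoder or GAN generator.

```python
# Hedged sketch: gradient descent on a squared l1 data-fit term for outlier-robust recovery.
# The generator, dimensions, and measurement model are placeholders, not the paper's configuration.
import torch

torch.manual_seed(0)
m, n, k = 200, 784, 20                       # measurements, signal dim, latent dim
G = torch.nn.Sequential(                     # stand-in generator (e.g., a trained decoder)
    torch.nn.Linear(k, 256), torch.nn.ReLU(), torch.nn.Linear(256, n)
)
A = torch.randn(m, n) / m**0.5               # random linear measurement operator

with torch.no_grad():
    z_true = torch.randn(k)
    y = A @ G(z_true)
    corruption = torch.zeros(m)
    corruption[torch.randperm(m)[:10]] = 5.0  # sparse, gross outliers
    y = y + corruption

z = torch.randn(k, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = torch.sum(torch.abs(y - A @ G(z))) ** 2   # squared l1 residual
    loss.backward()
    opt.step()

x_hat = G(z).detach()                         # reconstructed signal estimate
```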
Abstract: Background and Aims Positron emission tomography (PET)/computed tomography (CT) myocardial perfusion imaging (MPI) is a vital diagnostic tool, especially in patients with cardiometabolic syndrome. Low-dose CT scans are routinely performed with PET for attenuation correction and potentially contain valuable data about body tissue composition. Deep learning and image processing were combined to automatically quantify skeletal muscle (SM), bone, and adipose tissue from these scans and then evaluate their associations with death or myocardial infarction (MI). Methods In PET MPI from three sites, deep learning quantified SM, bone, epicardial adipose tissue (EAT), subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), and intermuscular adipose tissue (IMAT). Sex-specific thresholds for abnormal values were established. Associations with death or MI were evaluated using unadjusted and multivariable models adjusted for clinical and imaging factors. Results This study included 10 085 patients, with a median age of 68 years (interquartile range 59–76); 5767 (57%) were male. Body tissue segmentations were completed in 102 ± 4 s. Higher VAT density was associated with an increased risk of death or MI in both unadjusted [hazard ratio (HR) 1.40, 95% confidence interval (CI) 1.37–1.43] and adjusted (HR 1.24, 95% CI 1.19–1.28) analyses, with similar findings for IMAT, SAT, and EAT. Patients with elevated VAT density and reduced myocardial flow reserve had a significantly increased risk of death or MI (adjusted HR 2.49, 95% CI 2.23–2.77). Conclusions Volumetric body tissue composition can be obtained rapidly and automatically from standard cardiac PET/CT. This new information provides a detailed, quantitative assessment of sarcopenia and cardiometabolic health for physicians.
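To make the combined risk stratification concrete, the sketch below compares event-free survival between patients with both elevated VAT density and reduced myocardial flow reserve and all other patients, using Kaplan-Meier curves and a log-rank test. It is a minimal illustration; the column names and the MFR threshold of 2.0 are my own assumptions, not the study's definitions.

```python
# Hedged sketch: Kaplan-Meier comparison for the combined high-risk group.
# Column names and thresholds are illustrative assumptions.
import matplotlib.pyplot as plt
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("pet_ct_body_composition.csv")   # hypothetical per-patient table
high_risk = (df["vat_density_abnormal"] == 1) & (df["mfr"] < 2.0)

kmf = KaplanMeierFitter()
for label, group in [("elevated VAT density + reduced MFR", df[high_risk]),
                     ("all other patients", df[~high_risk])]:
    kmf.fit(group["follow_up_years"], event_observed=group["death_or_mi"], label=label)
    kmf.plot_survival_function()

result = logrank_test(df[high_risk]["follow_up_years"], df[~high_risk]["follow_up_years"],
                      df[high_risk]["death_or_mi"], df[~high_risk]["death_or_mi"])
print(result.p_value)
plt.show()
```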
Abstract: We consider the problem of recovering the superposition of R distinct complex exponential functions from compressed non-uniform time-domain samples. Total variation (TV) minimization and atomic norm minimization have been proposed in the literature to recover the R frequencies or the missing data. However, it is known that in order for TV minimization and atomic norm minimization to recover the missing data or the frequencies, the underlying R frequencies are required to be well separated, even when the measurements are noiseless. This paper shows that the Hankel matrix recovery approach can super-resolve the R complex exponentials and their frequencies from compressed non-uniform measurements, regardless of how close their frequencies are to each other. We propose a new concept of orthonormal atomic norm minimization (OANM), and demonstrate that the success of Hankel matrix recovery in separation-free super-resolution comes from the fact that the nuclear norm of a Hankel matrix is an orthonormal atomic norm. More specifically, we show that, in traditional atomic norm minimization, the underlying parameter values must be well separated to achieve successful signal recovery, if the atoms change continuously with respect to the continuously valued parameter. In contrast, OANM can succeed even when the original atoms are arbitrarily close to each other. As a byproduct of this research, we provide a matrix-theoretic inequality for the nuclear norm and prove it using the theory of compressed sensing.
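For context, one standard way to write the Hankel matrix recovery program discussed above is given below; the notation is mine and is not necessarily the paper's.

```latex
% Signal model: x_j = \sum_{k=1}^{R} c_k e^{2\pi i f_k j}, observed only on an index set \Omega.
% Hankel matrix recovery completes the samples by promoting low rank via the nuclear norm:
\begin{equation*}
  \min_{u \in \mathbb{C}^{n}} \ \big\| \mathcal{H}(u) \big\|_{*}
  \quad \text{subject to} \quad u_j = x_j \ \ \text{for all } j \in \Omega,
\end{equation*}
% where \mathcal{H}(u) is the Hankel matrix formed from u, whose rank equals the number R of
% distinct exponentials, and \|\cdot\|_{*} denotes the nuclear norm (sum of singular values).
```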
Abstract: Low-rank matrix recovery has found many applications in science and engineering, such as machine learning, system identification, and Euclidean embedding. However, the low-rank matrix recovery problem is NP-hard and thus challenging. A commonly used heuristic approach is nuclear norm minimization. Recently, some authors established the necessary and sufficient null space conditions for nuclear norm minimization to recover every possible low-rank matrix with rank at most r (the strong null space condition). Oymak et al. established a null space condition for the successful recovery of a given low-rank matrix (the weak null space condition) using nuclear norm minimization, and derived the phase transition for nuclear norm minimization. In this paper, we show that the weak null space condition proposed by Oymak et al. is only a sufficient condition for successful matrix recovery using nuclear norm minimization, and is not a necessary condition as claimed. We further give a weak null space condition for low-rank matrix recovery which is both necessary and sufficient for the success of nuclear norm minimization. At the core of our derivation are an inequality characterizing the nuclear norms of block matrices, and the conditions for equality to hold in that inequality.
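For reference, the nuclear norm heuristic and the strong null space condition it is commonly analyzed with can be stated as follows. This is a textbook-style statement in my own notation, not a quotation of the paper's conditions.

```latex
% Nuclear norm minimization for recovering a rank-r matrix X_0 from linear measurements \mathcal{A}(X_0):
\begin{equation*}
  \min_{X} \ \|X\|_{*} \quad \text{subject to} \quad \mathcal{A}(X) = \mathcal{A}(X_0).
\end{equation*}
% Strong null space condition: every matrix X_0 of rank at most r is the unique minimizer
% if and only if, for every nonzero W in the null space of \mathcal{A},
\begin{equation*}
  \sum_{i=1}^{r} \sigma_i(W) \;<\; \sum_{i=r+1}^{\min(n_1, n_2)} \sigma_i(W),
\end{equation*}
% where \sigma_1(W) \ge \sigma_2(W) \ge \cdots denote the singular values of W.
```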