The framework of our unified system for joint prostate detection, ADC-T2w registration, and CS PCa localization.
Automated methods for detecting clinically significant (CS) prostate cancer (PCa) in multi-parametric magnetic resonance images (mp-MRI) are in high demand. Existing methods typically employ several separate steps, each optimized individually without considering the error tolerance of the others. As a result, they can either incur unnecessary computational cost or suffer from errors accumulated across steps. In this paper, we present an automated CS PCa detection system in which all steps are optimized jointly in an end-to-end trainable deep neural network. The proposed network consists of two concatenated subnets: 1) a novel tissue deformation network (TDN) for automated prostate detection and multimodal registration, and 2) a dual-path convolutional neural network (CNN) for CS PCa detection. Three types of loss functions, i.e., classification loss, inconsistency loss, and overlap loss, are employed to optimize all parameters of the proposed TDN and CNN. In the training phase, the two subnets mutually affect each other and effectively guide registration and the extraction of representative CS PCa-relevant features, achieving results with sufficient accuracy. The entire network is trained in a weakly supervised manner using only image-level annotations (i.e., presence/absence of PCa), without priors on lesion locations. Compared with most existing systems, which require supervised labels such as manual delineation of PCa lesions, ours is much more convenient for clinical use. Comprehensive evaluation based on fivefold cross-validation over data from 360 patients demonstrates that our system achieves high accuracy for CS PCa detection, i.e., sensitivities of 0.6374 and 0.8978 at 0.1 and 1 false positives per normal/benign patient, respectively.
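To give a rough sense of how the three loss terms could be combined during joint training, the NumPy sketch below implements hypothetical stand-ins. The function names, the squared-difference and IoU-style forms, and the weights `w_inc` and `w_ovl` are illustrative assumptions, not the paper's exact formulations.

```python
import numpy as np

def classification_loss(p, y):
    # binary cross-entropy on the image-level PCa presence/absence label
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

def inconsistency_loss(feat_adc, feat_t2w):
    # penalizes divergent responses from the ADC and T2w paths
    return float(np.mean((feat_adc - feat_t2w) ** 2))

def overlap_loss(mask_pred, mask_ref):
    # IoU-style term encouraging the detected prostate region to
    # overlap a reference region produced by the registration step
    inter = np.minimum(mask_pred, mask_ref).sum()
    union = np.maximum(mask_pred, mask_ref).sum() + 1e-7
    return float(1.0 - inter / union)

def joint_loss(p, y, feat_adc, feat_t2w, mask_pred, mask_ref,
               w_inc=0.1, w_ovl=0.1):
    # weighted sum of the three terms; weights are assumed for illustration
    return (classification_loss(p, y)
            + w_inc * inconsistency_loss(feat_adc, feat_t2w)
            + w_ovl * overlap_loss(mask_pred, mask_ref))
```

Because the three terms share one scalar objective, gradients from the CNN's classification loss flow back through the TDN, which is how the two subnets can guide each other during training.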
Wang Z. W., Liu C. Y., Wang L., Yang X.*, Cheng K.-T. Automated Detection of Clinically Significant Prostate Cancer in mp-MRI Images based on an End-to-End Deep Neural Network, IEEE Trans. on Medical Imaging, 2018.
Framework of the automated system for PCa diagnosis
Automated methods for prostate cancer (PCa) diagnosis in multi-parametric magnetic resonance images (MP-MRIs) are critical for reducing the burden of image interpretation while helping to improve diagnostic accuracy (Artan et al 2010 IEEE Trans. Image Process. 19 2444–55, Litjens et al 2014 IEEE Trans. Med. Imaging 33 1083–92, Liu et al 2013 SPIE Medical Imaging (International Society for Optics and Photonics) p 86701G, Moradi et al 2012 J. Magn. Reson. Imaging 35 1403–13, Niaf et al 2014 IEEE Trans. Image Process. 23 979–91, Niaf et al 2012 Phys. Med. Biol. 57 3833, Peng et al 2013a SPIE Medical Imaging (International Society for Optics and Photonics) p 86701H, Peng et al 2013b Radiology 267 787–96, Wang et al 2014 BioMed. Res. Int. 2014). This paper presents an automated method based on multimodal convolutional neural networks (CNNs) for two PCa diagnostic tasks: (1) distinguishing between cancerous and noncancerous tissues and (2) distinguishing between clinically significant (CS) and indolent PCa. Specifically, our multimodal CNNs effectively fuse apparent diffusion coefficient (ADC) maps and T2-weighted MP-MRI images (T2WIs). To fuse ADCs and T2WIs effectively, we design a new similarity loss function that enforces consistent features being extracted from both modalities. The similarity loss is combined with the conventional classification loss and integrated into the back-propagation procedure of CNN training. It enables better fusion than existing methods because the feature learning processes of the two modalities mutually guide each other, jointly enabling the CNNs to 'see' the true visual patterns of PCa. The classification results of the multimodal CNNs are further combined with results based on handcrafted features using a support vector machine (SVM) classifier.
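A minimal sketch of how a similarity loss could be combined with a classification loss during back-propagation is shown below. The squared-distance form of the similarity term and the weight `lam` are assumptions for illustration; the paper defines its own formulation.

```python
import numpy as np

def similarity_loss(f_adc, f_t2w):
    # mean squared distance between the two modalities' feature vectors;
    # driving this toward zero enforces consistent features across modalities
    return float(np.mean((f_adc - f_t2w) ** 2))

def softmax_cross_entropy(logits, label):
    # conventional classification loss on the fused prediction
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return float(-np.log(p[label] + 1e-12))

def total_loss(logits_fused, label, f_adc, f_t2w, lam=0.5):
    # combined objective back-propagated through both modality paths;
    # lam balances fusion consistency against classification accuracy
    return (softmax_cross_entropy(logits_fused, label)
            + lam * similarity_loss(f_adc, f_t2w))
```

Since both modality paths receive gradients from the shared similarity term, each path's feature learning is guided by the other, which is the intuition behind the mutual-guidance claim above.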
To achieve accuracy satisfactory for clinical use, we comprehensively investigate three critical factors which can greatly affect the performance of our multimodal CNNs but have not been carefully studied previously: (1) Given limited training data, how can the data be augmented in sufficient numbers and variety for fine-tuning deep CNNs for PCa diagnosis? (2) How can multimodal MP-MRI information be effectively combined in CNNs? (3) What is the impact of different CNN architectures on the accuracy of PCa diagnosis? Experimental results on extensive clinical data from 364 patients, with a total of 463 PCa lesions and 450 identified noncancerous image patches, demonstrate that our system achieves a sensitivity of 89.85% and a specificity of 95.83% for distinguishing cancerous from noncancerous tissues, and a sensitivity of 100% and a specificity of 76.92% for distinguishing indolent from CS PCa. These results are significantly superior to those of the state-of-the-art method relying on handcrafted features.
Le M. H., Chen J. Y., Wang L., Cheng K.-T., Yang X.* Automated Diagnosis of Prostate Cancer in Multi-parametric MRI based on Multimodal Convolutional Neural Networks, Physics in Medicine and Biology, 2017.
Architecture of our multimodal CNNs for joint PCa detection and diagnosis.
This paper presents an automated method for jointly localizing prostate cancer (PCa) in multi-parametric MRI (mp-MRI) images and assessing the aggressiveness of detected lesions. Our method employs multimodal multi-label convolutional neural networks (CNNs), which are trained in a weakly supervised manner using a set of prostate images with image-level labels and no priors on lesion locations. By distinguishing images with different labels, discriminative visual patterns related to indolent PCa and clinically significant (CS) PCa are automatically learned from clutters of prostate tissues. Cancer response maps (CRMs), with each pixel indicating the likelihood of belonging to an indolent/CS lesion, are explicitly generated at the last convolutional layer. We define a new back-propagation error for the CNNs to enforce both optimized classification results and consistent CRMs across modalities. Our method enables the feature learning processes of the different modalities to mutually influence each other and, in turn, yield more representative features. Comprehensive evaluation based on 402 lesions demonstrates the superior performance of our method over the state-of-the-art method.
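One common way a per-pixel response map can be produced from the last convolutional layer in a weakly supervised setting is a class-activation-style weighted sum of the final feature maps. The sketch below illustrates that idea; the weighted-sum construction and min-max normalization are assumptions for illustration, not necessarily the CRM definition used in the paper.

```python
import numpy as np

def cancer_response_map(conv_maps, class_weights):
    # conv_maps: (C, H, W) activations from the last convolutional layer
    # class_weights: (C,) weights of one class (e.g. CS PCa) in the scoring layer
    crm = np.tensordot(class_weights, conv_maps, axes=1)  # weighted sum -> (H, W)
    # min-max normalize to [0, 1] so each pixel reads as a likelihood-style score
    crm -= crm.min()
    rng = crm.max()
    return crm / rng if rng > 0 else crm
```

Because only image-level labels are available, the spatial localization emerges from which feature maps the classifier weights most heavily, rather than from any pixel-level supervision.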
Yang X.*, Wang Z. W., Liu C. Y., Le M. H., Chen J. Y., Wang L., Cheng K.-T. Joint Detection and Diagnosis of Prostate Cancer in Multi-parametric MRI based on Multimodal Convolutional Neural Networks, International Conference on Medical Image Computing and Computer Assisted Intervention, 2017.
The framework of the automated PCa localization system
Multi-parametric magnetic resonance imaging (mp-MRI) is increasingly popular for prostate cancer (PCa) detection and diagnosis. However, interpreting mp-MRI data, which typically contains multiple unregistered 3D sequences, e.g., apparent diffusion coefficient (ADC) and T2-weighted (T2w) images, is time-consuming and demands special expertise, limiting its use for large-scale PCa screening. Therefore, solutions for computer-aided detection of PCa in mp-MRI images are highly desirable. Most recent advances in automated PCa detection employ a handcrafted-feature-based two-stage classification flow, i.e., voxel-level classification followed by region-level classification. This work presents an automated PCa detection system which can concurrently identify the presence of PCa in an image and localize lesions, based on deep convolutional neural network (CNN) features and a single-stage SVM classifier. Specifically, the developed co-trained CNNs consist of two parallel convolutional networks for ADC and T2w images, respectively. Each network is trained on images of a single modality in a weakly supervised manner, using a set of prostate images with image-level labels indicating only the presence of PCa, without priors on lesion locations. Discriminative visual patterns of lesions can be learned effectively from clutters of prostate and surrounding tissues. A cancer response map, with each pixel indicating the likelihood of being cancerous, is explicitly generated at the last convolutional layer of the network for each modality. A new back-propagated error E is defined to enforce both optimized classification results and consistent cancer response maps across modalities, which helps capture highly representative PCa-relevant features during CNN feature learning. The CNN features of each modality are concatenated and fed into an SVM classifier.
For images classified as containing cancer, non-maximum suppression and adaptive thresholding are applied to the corresponding cancer response maps for PCa foci localization. Evaluation based on data from 160 patients, with 12-core systematic TRUS-guided prostate biopsy as the reference standard, demonstrates that our system achieves sensitivities of 0.46, 0.92 and 0.97 at 0.1, 1 and 10 false positives per normal/benign patient, which is significantly superior to two state-of-the-art CNN-based methods (Oquab et al., 2015; Zhou et al., 2015) and to 6-core systematic prostate biopsies.
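The localization step above can be sketched as follows: keep only local maxima of the response map (non-maximum suppression), then retain peaks above a threshold adapted to the map's own peak response. The neighborhood size and the relative threshold `rel_thresh` are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def local_maxima(crm, size=1):
    # non-maximum suppression: keep pixels that are the strict, unique
    # maximum of their (2*size+1) x (2*size+1) neighborhood
    H, W = crm.shape
    peaks = []
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - size), min(H, i + size + 1)
            j0, j1 = max(0, j - size), min(W, j + size + 1)
            patch = crm[i0:i1, j0:j1]
            if crm[i, j] == patch.max() and (patch == crm[i, j]).sum() == 1:
                peaks.append((i, j))
    return peaks

def localize_foci(crm, rel_thresh=0.5):
    # adaptive threshold: a fraction of this map's own peak response,
    # so the cutoff scales with the overall response level per image
    t = rel_thresh * crm.max()
    return [(i, j) for (i, j) in local_maxima(crm) if crm[i, j] > t]
```

Making the threshold relative to each map's maximum avoids a single fixed cutoff that would behave inconsistently across patients with different overall response levels.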
Yang X., Liu C. Y., Wang Z. W.*, Yang J., Le M. H., Wang L., Cheng K. –T. Co-trained Convolutional Neural Networks for Automated Detection of Prostate Cancer in Multi-parametric MRI, Medical Image Analysis, 2017. [Github]
The framework of the DeepCADx.
In this paper, we present DeepCADx, a computer-aided prostate cancer detection and diagnosis (CADx) system powered by novel deep convolutional neural networks (CNNs). Specifically, the developed DeepCADx system processes multi-parametric magnetic resonance imaging (mp-MRI) sequences in three major steps: 1) pre-processing, which registers images from different modalities and detects the prostate; 2) multimodal CNNs, which jointly identify images containing prostate cancer (PCa) and generate cancer response maps (CRMs) with each pixel indicating the probability of being cancerous; and 3) post-processing, which localizes lesions in the CRMs and assesses the aggressiveness (i.e., Gleason score) of each localized lesion using multimodal CNN features and a 5-class SVM classifier.
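The three-stage flow can be sketched with stub interfaces as below. Every function body here is a simplified stand-in (a central crop for detection, an averaged response map for the CNNs, and nearest-centroid grading in place of the 5-class SVM); only the stage boundaries mirror the pipeline described above.

```python
import numpy as np

def preprocess(adc, t2w):
    # stand-in: assume modalities are already registered; "detect" the
    # prostate by cropping a central ROI from each sequence
    h, _ = adc.shape
    s = slice(h // 4, 3 * h // 4)
    return adc[s, s], t2w[s, s]

def multimodal_cnn(roi_adc, roi_t2w):
    # stand-in for the multimodal CNNs: fuse the two ROIs into one
    # response map and pool a small feature vector from it
    crm = (roi_adc + roi_t2w) / 2.0
    feat = np.array([crm.mean(), crm.max()])
    return crm, feat

def postprocess(crm, feat, centroids):
    # stand-in for the 5-class SVM: locate the strongest response and
    # grade the lesion by nearest centroid (centroids: (5, feat_dim))
    focus = np.unravel_index(np.argmax(crm), crm.shape)
    grade = int(np.argmin(((centroids - feat) ** 2).sum(axis=1)))
    return focus, grade
```

The point of the sketch is the data flow: registration and detection narrow the input, the CNN stage turns it into a response map plus features, and the final classifier consumes those features for grading.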
Wang, Z. W., Liu C. Y., Bai X., Yang X.* DeepCADx: Automated Prostate Cancer Detection and Diagnosis in mp-MRI based on Multimodal Convolutional Neural Networks, ACM Multimedia (Demo), 2017.