On Some Problems of Brain Disorders

Brain Tumor Classification from Radiology and Histopathology Data

In this work, we address the problem of brain tumor classification from radiology and histopathology data. A coarse-to-fine classification approach is adopted using a combination of deep features and a Graph Convolutional Network (GCN). In the first, coarse step, we use a 3D CNN to detect Glioblastoma from MRI images. To distinguish between Astrocytoma and Oligodendroglioma, Whole Slide Images (WSI) are employed in the second stage. During this fine classification stage, 2D CNN features are extracted at two different magnification levels. A graph is constructed with the concatenated feature embeddings as nodes; edges are constructed from feature similarity and graph topology. Finally, a GCN with the normalized graph Laplacian is used to ensure a better relation-aware representation, leading to more accurate classification. Experimental comparisons on the CPM-RadPath2020 challenge dataset, where our method achieves a balanced accuracy of 91.4%, clearly demonstrate the state-of-the-art performance of the proposed strategy.
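A minimal sketch of the GCN propagation referred to above, assuming PyTorch; the symmetric normalization D^{-1/2}(A + I)D^{-1/2} is the standard normalized-Laplacian form, while the layer widths and the two-class head are illustrative choices, not the exact architecture from the paper.

import torch
import torch.nn as nn


def normalize_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Return D^{-1/2} (A + I) D^{-1/2} for a dense adjacency matrix A."""
    a_hat = adj + torch.eye(adj.size(0), device=adj.device)
    deg = a_hat.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt


class SimpleGCN(nn.Module):
    # Hypothetical two-layer GCN for the two-class fine stage
    # (astrocytoma vs. oligodendroglioma); dimensions are illustrative.
    def __init__(self, in_dim, hidden_dim=64, num_classes=2):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, num_classes)

    def forward(self, x, adj_norm):
        # Each layer aggregates neighbor features through the normalized
        # adjacency and then applies a learned linear transform.
        x = torch.relu(adj_norm @ self.fc1(x))
        return adj_norm @ self.fc2(x)  # per-node class logits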

We propose an effective pipeline for multimodal tumor classification among glioblastoma, astrocytoma and oligodendroglioma using radiological and histopathological images. Our main contributions are listed below:

1) We apply a 3D CNN model for coarse classification, i.e., glioblastoma vs. non-glioblastoma (which could be either astrocytoma or oligodendroglioma), from 3D MRI volumes.

2) We construct a deep feature extraction model for WSI using a 2D CNN. Features from two different magnification levels of the WSI are treated as local and global features.

3) We employ a Graph Convolutional Network (GCN) for fine classification of non-glioblastoma cases into astrocytoma and oligodendroglioma. A feature vector combining the local and global features is used as a node; edges are constructed by considering both feature similarity and graph topology.
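A minimal sketch of the graph construction described in contributions 2) and 3): each node concatenates a local (high-magnification) and a global (low-magnification) CNN embedding, and edges link each node to its most similar neighbors by cosine similarity. The k-nearest-neighbor edge rule here is an assumption standing in for the paper's exact combination of feature similarity and graph topology.

import numpy as np


def build_graph(local_feats, global_feats, k=8):
    """local_feats, global_feats: (N, d) arrays of per-node CNN embeddings."""
    # Node features: concatenation of local and global embeddings.
    nodes = np.concatenate([local_feats, global_feats], axis=1)

    # Cosine similarity between all node pairs.
    normed = nodes / (np.linalg.norm(nodes, axis=1, keepdims=True) + 1e-8)
    sim = normed @ normed.T

    # Connect every node to its k most similar neighbors (excluding itself).
    adj = np.zeros_like(sim)
    for i in range(sim.shape[0]):
        order = np.argsort(-sim[i])
        neighbors = [j for j in order if j != i][:k]
        adj[i, neighbors] = 1.0
    adj = np.maximum(adj, adj.T)  # symmetrize the adjacency matrix
    return nodes, adj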

Results

Dataset used

CPM-RadPath2020

Related publication

A. De, R. Mhatre, M. Tewari, A.S. Chowdhury: Brain Tumor Classification from Radiology and Histopathology using Deep Features and Graph Convolutional Network, Twenty-Sixth International Conference on Pattern Recognition (ICPR), Montreal, Canada (2022)

(a) 2D and 3D views of the input image with the tumor region marked by a red circle, (b) volumetric view of the segmented tumor region inside the brain, (c) 3D view of the segmented tumor only

Brain Tumor Segmentation using Deep Learning and Graph Cut

Brain tumor segmentation plays a key role in tumor diagnosis and surgical planning. In this paper, we propose a solution to the 3D brain tumor segmentation problem from MRI data using deep learning and graph cut. In particular, the probability maps from the UNet, giving the likelihood of each voxel belonging to the object (tumor) or the background class, are used to improve the energy function of the graph cut. We derive new expressions for the data term, the region term, and the weight factor balancing the data term and the region term for individual voxels in our proposed model. We validate the performance of our model on the publicly available BRATS 2018 dataset. Our segmentation accuracy, with a Dice similarity score of 0.92, is found to be higher than that of the graph cut and the UNet applied in isolation, as well as that of a number of state-of-the-art approaches.

Detection, delineation and characterization of 3D brain tumors using MR imaging is very important in guiding the treatment strategy. In this paper, we have shown how the UNet and graph cut can be combined to achieve better segmentation performance in 3D. New expressions for the constituent terms in the graph cut energy function are explicitly derived with the help of the probability maps obtained from the UNet. We have established through comprehensive experimentation that our proposed deep graph cut model yields competitive performance on the publicly available BRATS dataset.
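A minimal sketch of how UNet probability maps can feed a graph cut energy: voxel-wise probabilities become negative-log-likelihood terminal (source/sink) costs, and neighboring voxels get a standard intensity-based boundary weight. These are the textbook graph cut expressions, shown only for illustration; they are not the new data-term, region-term, and weight-factor expressions derived in the paper.

import numpy as np


def unary_costs(p_tumor, eps=1e-6):
    """p_tumor: UNet probability of each voxel belonging to the tumor."""
    p = np.clip(p_tumor, eps, 1.0 - eps)
    cost_fg = -np.log(p)        # cost of labeling the voxel as tumor
    cost_bg = -np.log(1.0 - p)  # cost of labeling the voxel as background
    return cost_fg, cost_bg


def pairwise_weight(i_p, i_q, sigma=10.0):
    """Standard boundary (n-link) weight between neighboring voxel intensities."""
    return float(np.exp(-((i_p - i_q) ** 2) / (2.0 * sigma ** 2)))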

Results

Dataset used

BRATS 2018

Related publication

A. De, M. Tewari, A.S. Chowdhury: A Deep Graph Cut Model for 3D Brain Tumor Segmentation, Forty-fourth International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Glasgow, Scotland, UK (2022) [Reprint]

Axial slices of EPI, FA and MD maps of two patients: (a), (b), (c) show the slices of an AD patient and (e), (f), (g) show those of a healthy person. EPI and MD show the white matter (WM) in black, while FA shows the WM in white. In the AD patient, the total WM region is smaller than in the healthy person, which is clearly indicated by the larger lateral ventricle body in the AD patient.

Alzheimer's Disease Classification

Automated classification of Alzheimer’s disease (AD) plays a key role in the diagnosis of dementia. In this work, we solve for the first time a direct four-class classification problem, namely, AD, Normal Control (CN), Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI), by processing Diffusion Tensor Imaging (DTI) in 3D. DTI provides information on brain anatomy in the form of Fractional Anisotropy (FA) and Mean Diffusivity (MD), along with Echo Planar Imaging (EPI) intensities. We separately train CNNs, more specifically VoxCNNs, on FA values, MD values, and EPI intensities from the 3D DTI scan volumes. In addition, we feed average FA and MD values for each brain region, derived according to the Colin27 brain atlas, into a random forest classifier (RFC). These four models (three separately trained VoxCNNs and one RFC) are first applied in isolation to the above four-class classification problem. Individual classification results are then fused at the decision level using a modulated rank averaging strategy, leading to a classification accuracy of 92.6%. Comprehensive experimentation on the publicly available ADNI database clearly demonstrates the effectiveness of the proposed solution.
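A minimal sketch of decision-level fusion by rank averaging: each of the four models (three VoxCNNs and the RFC) ranks the four classes by its predicted score, and the ranks are averaged across models. This plain rank average is only an illustration; the paper's modulated weighting of the individual models is not reproduced here.

import numpy as np


def rank_average(scores_per_model):
    """scores_per_model: list of length-4 arrays of class scores (AD, CN, EMCI, LMCI)."""
    rank_sum = np.zeros_like(scores_per_model[0], dtype=float)
    for scores in scores_per_model:
        # argsort of argsort gives each class a rank: 0 = lowest score, 3 = highest.
        ranks = scores.argsort().argsort()
        rank_sum += ranks
    # The class with the highest summed rank (most preferred overall) wins.
    return int(rank_sum.argmax())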

Results

Dataset used

ADNI dataset

Related publication

A. De, A.S. Chowdhury: DTI based Alzheimer’s disease classification with rank modulated fusion of CNNs and random forest, Expert Systems with Applications, 169, 114338 (2021) [Reprint]