Combining large multi-center datasets can enhance statistical power, particularly in neurology, where data are often scarce. However, a deep learning model trained on existing neuroimaging data often produces inconsistent results when tested on new data due to domain shift, the mismatch between the training (source domain) and testing (target domain) data. Existing literature offers several solutions based on domain adaptation (DA) techniques, but these typically ignore practical scenarios in which heterogeneity exists within the source or target domain itself. This study proposes a new perspective on the domain shift issue for MRI data by identifying and addressing the dominant factor causing heterogeneity in the dataset. We design an unsupervised DA method leveraging the maximum mean discrepancy and correlation alignment losses to learn domain-invariant features. Instead of regarding the entire dataset as a single source or target domain, the data are partitioned according to the dominant factor of variation, the scanner manufacturer. Afterwards, the target domain's feature space is aligned pairwise with each source domain's feature space. Experimental results demonstrate significant performance gains for multiple inter- and intra-study neurodegenerative disease classification tasks. GitHub source code available at https://github.com/rkushol/DAMS.
Publication:
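To make the alignment objective concrete, below is a minimal PyTorch sketch of the two losses named above, applied pairwise against each source domain; the Gaussian kernel bandwidth `sigma` and trade-off weight `lam` are illustrative assumptions, not the exact DAMS configuration.

```python
# A minimal sketch of the two alignment losses named above; kernel
# bandwidth and loss weighting are assumptions, not the DAMS settings.
import torch

def mmd_loss(xs, xt, sigma=1.0):
    """Maximum mean discrepancy between source (xs) and target (xt) features."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)          # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(xs, xs).mean() + kernel(xt, xt).mean() - 2 * kernel(xs, xt).mean()

def coral_loss(xs, xt):
    """CORAL: squared Frobenius distance between feature covariances."""
    d = xs.size(1)
    return (torch.cov(xs.T) - torch.cov(xt.T)).pow(2).sum() / (4 * d * d)

def pairwise_alignment_loss(source_domains, xt, lam=1.0):
    """Align target features xt against each source domain (e.g., one per
    scanner manufacturer) separately, as in the pairwise scheme above."""
    return sum(mmd_loss(xs, xt) + lam * coral_loss(xs, xt) for xs in source_domains)
```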
In medical research and clinical applications, the use of MRI datasets from multiple centers has become increasingly prevalent. However, inherent variability between these centers presents challenges due to domain shift, which can impact the quality and reliability of the analysis. Regrettably, the absence of adequate tools for domain shift analysis hinders the development and validation of domain adaptation and harmonization techniques. To address this issue, this paper presents a novel Domain Shift analyzer for MRI (DSMRI), a framework designed specifically for domain shift analysis in multi-center MRI datasets. The proposed framework assesses the degree of domain shift within an MRI dataset by leveraging various MRI-quality-related metrics derived from the spatial domain. DSMRI also incorporates frequency-domain features to capture low- and high-frequency information about the image, and wavelet-domain features that measure the sparsity and energy of the wavelet coefficients. Furthermore, DSMRI introduces several texture features, enhancing the robustness of the domain shift analysis. The framework includes visualization techniques such as t-SNE and UMAP to demonstrate that similar data are grouped closely while dissimilar data fall into separate clusters. Additionally, quantitative analysis measures the domain shift distance, domain classification accuracy, and the ranking of significant features. The effectiveness of the proposed approach is demonstrated through experimental evaluations on seven large-scale multi-site neuroimaging datasets.
GitHub source code available at https://github.com/rkushol/DSMRI.
Publication:
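As a rough illustration of the four feature families described above (spatial, frequency, wavelet, and texture), here is a Python sketch with simple stand-in metrics; the actual DSMRI feature set is richer and differs in detail (see the repository).

```python
# Illustrative stand-ins for the four feature families DSMRI draws on;
# the framework's actual metrics are richer and differ in detail.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def slice_features(img):
    """img: 2D MRI slice as a float array -> dict of example features."""
    feats = {}
    # Spatial: coefficient of variation as a simple quality proxy
    feats["spatial_cv"] = float(img.std() / (img.mean() + 1e-8))
    # Frequency: share of energy outside the central (low-frequency) band
    f = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = f.shape
    low = f[h // 4: 3 * h // 4, w // 4: 3 * w // 4].sum()
    feats["freq_high_ratio"] = float((f.sum() - low) / (f.sum() + 1e-8))
    # Wavelet: energy and sparsity of the detail coefficients
    _, (cH, cV, cD) = pywt.dwt2(img, "haar")
    detail = np.concatenate([c.ravel() for c in (cH, cV, cD)])
    feats["wavelet_energy"] = float((detail ** 2).mean())
    feats["wavelet_sparsity"] = float((np.abs(detail) < 1e-3).mean())
    # Texture: GLCM contrast on the slice quantized to 8-bit
    q = (255 * (img - img.min()) / (np.ptp(img) + 1e-8)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=256)
    feats["texture_contrast"] = float(graycoprops(glcm, "contrast")[0, 0])
    return feats
```

Stacking such per-scan feature vectors and embedding them with t-SNE or UMAP then exposes scanner- or site-driven clusters, as the framework's visualizations do.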
Deep learning has become a leading subset of machine learning and has been successfully employed in diverse areas, ranging from natural language processing to medical image analysis. In medical imaging, researchers have progressively turned towards multi-center neuroimaging studies to address complex questions in neuroscience, leveraging larger sample sizes and aiming to enhance the accuracy of deep learning models. However, variations in image pixel/voxel characteristics can arise between centers due to factors such as differences in magnetic resonance imaging scanners. Such variations, often referred to as domain shift, create challenges, most notably the inconsistent performance of machine learning-based approaches, where trained models fail to achieve satisfactory results when confronted with dissimilar test data. This study analyzes the performance of multiple disease classification tasks using multi-center MRI data obtained from three widely used scanner manufacturers (GE, Philips, and Siemens) across several deep learning-based networks. Furthermore, we investigate the efficacy of mitigating scanner vendor effects with ComBat-based harmonization techniques applied to multi-center datasets of 3D structural MR images. Our experimental results reveal a substantial decline in classification performance when models trained on data from one scanner manufacturer are tested on data from different manufacturers. Moreover, despite applying ComBat-based harmonization, the harmonized images do not demonstrate any noticeable performance enhancement for disease classification tasks.
GitHub project page: https://github.com/rkushol/Effects-of-MRI-scanner-manufacturer.
Publication:
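For intuition about what ComBat-style harmonization does, below is a deliberately simplified location/scale sketch in NumPy: each site's feature distribution is shifted and rescaled onto the pooled one. Full ComBat additionally preserves biological covariates and applies empirical-Bayes shrinkage, so real experiments should use a complete implementation such as neuroCombat.

```python
# A deliberately simplified location/scale harmonization in the spirit of
# ComBat; no empirical-Bayes shrinkage and no covariate preservation.
import numpy as np

def simple_harmonize(features, sites):
    """features: (n_subjects, n_features) array; sites: (n_subjects,) labels."""
    out = features.astype(float).copy()
    grand_mean = features.mean(axis=0)
    grand_std = features.std(axis=0) + 1e-8
    for s in np.unique(sites):
        idx = sites == s
        site_mean = features[idx].mean(axis=0)
        site_std = features[idx].std(axis=0) + 1e-8
        # shift/scale each site's distribution onto the pooled one
        out[idx] = (features[idx] - site_mean) / site_std * grand_std + grand_mean
    return out
```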
Amyotrophic Lateral Sclerosis (ALS) is a complex neurodegenerative disorder characterized by motor neuron degeneration. Significant research has begun to establish brain magnetic resonance imaging (MRI) as a potential biomarker to diagnose and monitor the state of the disease. Deep learning has emerged as a prominent class of machine learning algorithms in computer vision and has shown successful applications in various medical image analysis tasks. However, deep learning methods applied to neuroimaging have not achieved superior performance in distinguishing ALS patients from healthy controls, because the structural changes correlated with pathological features are subtle. Thus, a critical challenge for deep models is to identify discriminative features from limited training data. To address this challenge, this study introduces a framework called SF2Former, which leverages the vision transformer architecture to distinguish ALS subjects from the control group by exploiting long-range relationships among image features. Additionally, spatial and frequency domain information is combined to enhance the network’s performance, as MRI scans are initially captured in the frequency domain and then converted to the spatial domain. The proposed framework is trained on a series of consecutive coronal slices and utilizes pre-trained ImageNet weights through transfer learning. Finally, a majority voting scheme over the coronal slices of each subject generates the final classification decision. The proposed architecture is extensively evaluated with multi-modal neuroimaging data (i.e., T1-weighted, R2*, FLAIR) using two well-organized versions of the Canadian ALS Neuroimaging Consortium (CALSNIC) multi-center datasets. The experimental results demonstrate the superiority of the proposed strategy in terms of classification accuracy compared to several popular deep learning-based techniques.
GitHub source code available at https://github.com/rkushol/ADDFormer.
Publication:
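To illustrate the spatial-plus-frequency idea described above, here is a minimal PyTorch sketch that stacks a coronal slice with its log-magnitude and phase spectra as input channels for an ImageNet-style backbone; this channel-stacking scheme is an assumption for illustration, not necessarily SF2Former's exact fusion mechanism.

```python
# A minimal sketch of spatial + frequency fusion: the slice and its
# log-magnitude/phase spectra become three input channels. This stacking
# scheme is assumed for illustration, not SF2Former's exact design.
import torch

def fuse_spatial_frequency(slice_2d):
    """slice_2d: (H, W) float tensor -> (3, H, W) tensor."""
    spec = torch.fft.fftshift(torch.fft.fft2(slice_2d))
    log_mag = torch.log1p(spec.abs())          # log-magnitude spectrum
    phase = spec.angle()                       # phase spectrum

    def norm(x):                               # rescale each channel to [0, 1]
        return (x - x.min()) / (x.max() - x.min() + 1e-8)

    return torch.stack([norm(slice_2d), norm(log_mag), norm(phase)])
```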
Alzheimer’s disease is the most prevalent neurodegenerative disorder, a brain disease causing dementia that presents with memory loss and cognitive impairment. Experts primarily use brain imaging and other tests to rule out other possible causes of the symptoms. To automatically detect Alzheimer’s patients from healthy controls, this study adopts the vision transformer architecture, which can effectively capture the global, long-range relationships among image features. To further enhance the network’s performance, frequency- and image-domain features are fused, since MRI data are acquired in the frequency domain before being transformed into images. We train the model on selected coronal 2D slices to leverage transfer learning from a network pre-trained on ImageNet. Finally, a majority vote over the coronal slices of an individual subject generates the final classification decision. Our proposed method has been evaluated on the publicly available benchmark ADNI dataset. The experimental results demonstrate the advantage of our proposed approach in terms of classification accuracy compared with state-of-the-art methods.
GitHub source code available at https://github.com/rkushol/ADDFormer.
Publication:
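The subject-level decision rule shared by the two transformer projects above is a simple majority vote over per-slice predictions; a minimal PyTorch sketch follows, where `model` is an assumed placeholder for any trained slice classifier.

```python
# A minimal sketch of the subject-level majority vote described above;
# `model` is an assumed placeholder for any trained slice classifier.
import torch

@torch.no_grad()
def predict_subject(model, slices):
    """slices: (n_slices, C, H, W) tensor for one subject -> class index."""
    logits = model(slices)                     # (n_slices, n_classes)
    votes = logits.argmax(dim=1)               # one vote per coronal slice
    return torch.mode(votes).values.item()     # the majority class wins
```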
Impaired blood circulation inside the retinal vessels is a primary source of many ocular disorders, including partial vision loss and blindness. Accurate blood vessel segmentation of the retinal image is used for biometric identification, computer-assisted laser surgery, automatic screening, and diagnosis of ophthalmologic diseases such as diabetic retinopathy, age-related macular degeneration, and hypertensive retinopathy. Identifying retinal blood vessel abnormalities at an early stage helps medical experts begin timely treatment that can mitigate potential vision loss. Detailed and comprehensive experiments conducted on two benchmark, publicly available retinal color image databases (DRIVE and STARE) demonstrate the effectiveness of the proposed approaches, with an average vessel segmentation accuracy of approximately 95% (a simple baseline is sketched after the publication list below).
Publications:
1. An efficient multiscale directional representation technique, Bendlets. [Link]
2. Rbvs-net: A robust convolutional neural network for retinal blood vessel segmentation. [Link]
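For context, a classic unsupervised baseline (not the methods in the publications above) can be sketched in a few lines of Python using Frangi vesselness filtering and Otsu thresholding on the fundus image's green channel, where vessel contrast is strongest.

```python
# A classic unsupervised baseline, not the published methods above:
# Frangi vesselness filtering plus Otsu thresholding on the green
# channel of a DRIVE/STARE fundus image.
import numpy as np
from skimage import io, filters

def segment_vessels(path):
    """Return a boolean vessel mask for the fundus image at `path`."""
    rgb = io.imread(path)
    green = rgb[..., 1].astype(float) / 255.0   # vessel contrast is strongest here
    # black_ridges=True (the default) targets dark vessels on a bright fundus
    vesselness = filters.frangi(green, sigmas=range(1, 8, 2))
    return vesselness > filters.threshold_otsu(vesselness)
```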
In medical images, a contrast-enhanced view helps to separate bones, vessels, tumors, and different kinds of soft tissue. We introduce a new dynamic method for enhancing the contrast of medical images using morphological operators, namely the top-hat and bottom-hat transforms, where the size of the structuring element is selected automatically from the measurement of the Edge Content-based contrast matrix (a sketch follows the publication list below).
Publications:
1. Top-hat and Bottom-hat Transform with Optimal SE. [Link]
2. Image Enhancement for X-ray images. [Link]
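Here is a minimal sketch of the enhancement rule implied above, enhanced = image + top-hat - bottom-hat, with the structuring-element radius chosen automatically; the mean Sobel gradient stands in for the paper's Edge Content-based contrast matrix, so treat the scoring function as an assumption.

```python
# A minimal sketch: enhanced = image + top-hat - bottom-hat, with the
# structuring-element radius chosen by an edge-content score. The mean
# Sobel gradient is an assumed stand-in for the paper's Edge
# Content-based contrast matrix.
import numpy as np
from skimage.filters import sobel
from skimage.morphology import black_tophat, disk, white_tophat

def enhance(img, radii=(3, 5, 9, 15)):
    """img: 2D grayscale float array in [0, 1] -> contrast-enhanced copy."""
    best, best_score = img, -np.inf
    for r in radii:
        se = disk(r)                             # candidate structuring element
        out = np.clip(img + white_tophat(img, se) - black_tophat(img, se), 0, 1)
        score = sobel(out).mean()                # edge content of the candidate
        if score > best_score:
            best, best_score = out, score
    return best
```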
Due to advancements in computer technology, smart cameras, and image manipulation software, digital image forgery is becoming increasingly widespread, and instead of conveying the actual information, a forged digital image misleads its viewers. We analyze both local and global image features to efficiently detect duplicated or forged regions within a high-resolution image (a block-matching sketch follows the publication list below).
Publications:
1. A Circular Block Approach. [Link]
2. Color Space and Moment Invariants-Based Feature Approach. [Link]
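As an illustration of local-feature matching for copy-move detection, below is a Python sketch that describes overlapping blocks by a few low-frequency DCT coefficients and flags spatially distant blocks with near-identical descriptors; the published methods use different features, e.g., circular blocks, color spaces, and moment invariants.

```python
# An illustrative block-matching sketch of copy-move detection: copied
# regions surface as spatially distant blocks whose low-frequency DCT
# descriptors are nearly identical. The published methods use different
# features (circular blocks, color spaces, moment invariants).
import numpy as np
from scipy.fft import dctn

def find_duplicate_blocks(gray, block=16, step=4, tol=1e-3):
    """gray: 2D float image -> list of ((y1, x1), (y2, x2)) suspect pairs."""
    descs, coords = [], []
    for y in range(0, gray.shape[0] - block + 1, step):
        for x in range(0, gray.shape[1] - block + 1, step):
            d = dctn(gray[y:y + block, x:x + block], norm="ortho")
            descs.append(d[:4, :4].ravel())      # keep low-frequency coefficients
            coords.append((y, x))
    descs = np.array(descs)
    order = np.lexsort(descs.T[::-1])            # similar descriptors become neighbours
    pairs = []
    for i, j in zip(order[:-1], order[1:]):
        offset = np.abs(np.array(coords[i]) - np.array(coords[j])).sum()
        if offset > block and np.linalg.norm(descs[i] - descs[j]) < tol:
            pairs.append((coords[i], coords[j]))
    return pairs
```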