Publications
All my publications relate to biomedical imaging; they explore how AI can help doctors, researchers, and lab assistants detect abnormalities faster.
Diabetic retinopathy (DR) is a disease caused by damage to the blood vessels of the light-sensitive tissue of the eye. DR is graded into five levels: normal, mild, moderate, severe, and proliferative. Diagnosing DR from fundus images is time-consuming, and an accurate automatic model requires more training data than is currently available. The open-source DR datasets are highly imbalanced across the DR levels, and it is difficult to collect more data for proliferative cases. Synthetically generating data for such highly imbalanced classes yields better classification results. In this paper, we analyze classification results on the EYEPACS dataset after augmenting the proliferative class (the most imbalanced class) using a generative adversarial network (GAN). We generated highly diverse images for proliferative cases without any constraints. The generated proliferative images do not interfere with the other classes and improve the classification results over a model trained without synthetic generation. The results obtained before and after augmentation by the proposed generative model are compared across various model attributes.
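The adversarial objective behind such synthetic augmentation can be illustrated with a minimal sketch. The toy model below (NumPy only, 1-D data, made-up hyperparameters) trains a one-parameter generator against a logistic discriminator with the standard non-saturating GAN losses; it is a toy analogue of the image GAN used in the paper, not the actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "minority class" data: 1-D samples from N(4, 0.5).
real = rng.normal(4.0, 0.5, size=1000)

# Generator: x = w_g * z + b_g, with latent z ~ N(0, 1).
w_g, b_g = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w_d * x + b_d).
w_d, b_d = 0.1, 0.0

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

lr = 0.01
for step in range(2000):
    z = rng.normal(size=64)
    fake = w_g * z + b_g
    batch = rng.choice(real, size=64)

    # Discriminator ascent step: maximize log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w_d * batch + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    w_d += lr * (np.mean((1 - d_real) * batch) + np.mean(-d_fake * fake))
    b_d += lr * (np.mean(1 - d_real) + np.mean(-d_fake))

    # Generator ascent step: maximize log D(fake) (non-saturating loss).
    d_fake = sigmoid(w_d * fake + b_d)
    w_g += lr * np.mean((1 - d_fake) * w_d * z)
    b_g += lr * np.mean((1 - d_fake) * w_d)

# After training, generated samples w_g * z + b_g drift toward the
# real distribution, so b_g moves from 0 toward the real mean of 4.
```

At equilibrium the discriminator cannot tell the two sources apart, which is exactly the property that lets synthetic proliferative images stand in for real ones during classifier training.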
Worldwide, 1.7 billion people suffer from various musculoskeletal conditions, which lead to severe disability and long-term pain. Because of the shortage of qualified radiologists in many parts of the world, there is a need for an automatic framework that can accurately detect abnormalities in radiograph images. Deep learning (DL) is popular because it can extract useful features automatically with little human intervention, and it is used to solve research problems in a wide range of fields such as biomedicine, cybersecurity, and autonomous vehicles. Convolutional neural network (CNN) based models are especially common in biomedical applications because CNNs automatically extract location-invariant features from input images. In this chapter, we examine the effectiveness of various CNN-based pretrained models for detecting abnormalities in radiographic images and compare their performance using standard statistical measures.
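The location-invariance property mentioned above can be made concrete with a small sketch (NumPy only; the kernel and image are illustrative, not taken from any pretrained model). A convolution slides one kernel over the whole image, so a feature detector fires wherever its pattern appears, and global max pooling then discards the location:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny vertical-edge detector.
k = np.array([[1.0, -1.0],
              [1.0, -1.0]])

img = np.zeros((8, 8))
img[2:5, 2] = 1.0                  # a vertical bar at column 2
shifted = np.roll(img, 3, axis=1)  # the same bar, moved to column 5

# Global max pooling over the feature map: the response is identical
# no matter where the bar sits, i.e. the feature is location-invariant.
f_original = conv2d(img, k).max()
f_shifted = conv2d(shifted, k).max()
```

Both pooled responses equal 2.0 here, because the sliding kernel meets the same local pattern in both images; deep CNNs stack many such learned detectors.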
We also analyze the performance of pretrained CNN architectures on radiographs of different regions of the body and discuss the challenges of the data set in detail. Standard CNN networks such as Xception, Inception v3, VGG-19, DenseNet, and MobileNet are trained on radiographs from the musculoskeletal radiographs (MURA) data set, released as an open challenge by the Stanford machine learning (ML) group. MURA is a large data set containing 40,561 images from 14,863 studies (9,045 normal and 5,818 abnormal studies), covering various parts of the body: elbow, finger, forearm, hand, humerus, shoulder, and wrist. In this chapter, finger, wrist, and shoulder radiographs are considered for binary classification (normal vs. abnormal), because data in these categories are less imbalanced than in the others. The data set provides 23,241 training and 1,683 validation images for the three categories considered in the present work. In the experimental analysis, the performance of the models is measured using statistical measures such as accuracy, precision, recall, and F1-score.
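The evaluation measures listed above can be sketched as follows. This is a plain-Python illustration of the standard definitions; the labels are toy values, not actual MURA predictions:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for a binary (normal=0 / abnormal=1) task."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Toy example: 8 radiographs, one missed abnormality (FN) and one false alarm (FP).
m = binary_metrics([1, 1, 1, 0, 0, 0, 1, 0],
                   [1, 0, 1, 0, 0, 1, 1, 0])
# → accuracy 0.75, precision 0.75, recall 0.75, F1 0.75
```

Precision and recall matter here more than accuracy alone, because even a mildly imbalanced class split lets a trivial majority-class model score a deceptively high accuracy.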
Accurate automatic identification and localization of spine vertebrae points in CT scan images is crucial in medical diagnosis. This paper presents an automatic feature extraction network, based on a transfer-learned CNN, to cope with the limited number of available samples. The 3D vertebra centroids are identified and localized by an LSTM network trained on CNN features extracted from 242 CT spine sequences. The model is further trained to estimate age and gender from the LSTM features. The result is a multi-task, data-driven framework for identifying and localizing spine vertebrae points, estimating age, and classifying gender. The proposed approach is compared with benchmark results on a test set of 60 scans. An advantage of the multi-task framework is that it needs no additional information beyond the annotations marking the vertebrae points on the spine images.
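The shape of this CNN-feature-plus-LSTM pipeline can be sketched as below (NumPy only; the dimensions, random weights, and head names are illustrative stand-ins, not the trained model from the paper). A sequence of per-slice CNN feature vectors is folded through an LSTM cell, and three task heads read the final hidden state:

```python
import numpy as np

rng = np.random.default_rng(42)

def lstm_cell(x, h, c, W, U, b):
    """One LSTM step; the four gates are stacked in W, U, b."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    sig = lambda t: 1.0 / (1.0 + np.exp(-t))
    i, f, o, g = sig(i), sig(f), sig(o), np.tanh(g)
    c = f * c + i * g          # update cell state
    h = o * np.tanh(c)         # emit hidden state
    return h, c

feat_dim, hidden = 64, 32
W = rng.normal(scale=0.1, size=(4 * hidden, feat_dim))
U = rng.normal(scale=0.1, size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)

# A sequence of per-slice CNN feature vectors (random stand-ins here;
# in the paper these come from the transfer-learned CNN).
seq = rng.normal(size=(10, feat_dim))
h, c = np.zeros(hidden), np.zeros(hidden)
for x in seq:
    h, c = lstm_cell(x, h, c, W, U, b)

# Multi-task heads on the shared LSTM state (weights are illustrative).
W_xyz = rng.normal(scale=0.1, size=(3, hidden))  # centroid regression (x, y, z)
W_age = rng.normal(scale=0.1, size=(1, hidden))  # age estimation
W_sex = rng.normal(scale=0.1, size=(2, hidden))  # gender classification logits
centroid, age, gender_logits = W_xyz @ h, W_age @ h, W_sex @ h
```

Sharing one recurrent backbone across the three heads is what lets the framework learn all tasks from vertebra-point annotations alone.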