Proposed the first optical physically unclonable function (PUF)–based authentication system for integrated circuit (IC) chips, leveraging the unique microstructure of chip packaging surfaces and enabling authentication using images captured by consumer-grade scanners and mobile cameras.
Implemented an efficient, lightweight video-based verification scheme that utilizes robust specular-reflection features—offering significantly lower error rates and operational complexity compared to diffuse-reflection-based or electronic PUF methods.
This work introduces a novel authentication method that utilizes indoor lighting instead of camera flash for capturing the microstructure of paper, enhancing safety in workplace environments. By creating a digital twin (DT) that simulates paper patches under various lighting conditions and adheres to key optical laws, this method navigates the challenges posed by weaker light and secondary reflections.
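One of the key optical laws such a digital twin must respect is Lambert's cosine law for diffuse reflection, which governs how a paper patch responds to off-axis indoor lighting. A minimal numpy sketch (the function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def lambertian_intensity(albedo, normal, light_dir):
    """Diffuse intensity under Lambert's cosine law: I = albedo * max(0, n.l).

    normal and light_dir are 3-vectors; both are normalized here so their
    dot product equals the cosine of the incidence angle.
    """
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n = n / np.linalg.norm(n)
    l = l / np.linalg.norm(l)
    return albedo * max(0.0, float(n @ l))
```

Under this model, intensity falls off with the cosine of the angle between the surface normal and the light direction, which is why weaker, oblique indoor lighting yields a dimmer microstructure signal than a head-on camera flash.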
Brain tumors are classified as primary (originating in the brain) or secondary (metastasizing from elsewhere). Gliomas are the most prevalent malignant primary brain tumors in adults, accounting for about 80% of cases. The WHO classifies gliomas into four grades: low-grade gliomas (LGG, grades 1-2), which are less prevalent, have a low blood supply, and grow slowly; and high-grade gliomas (HGG, grades 3-4), which grow rapidly and are aggressive.
Our proposed model is a two-stage cascaded encoder-decoder network. A variational autoencoder branch is included in both stages of training, and a transformer module is incorporated into the bottleneck layer to capture long-range dependencies. An attention gate is added in the second stage to help the network segment smaller tumor patches; this block increases the Dice score for smaller glioma sub-regions, such as the enhancing tumor. Our technique is validated on the BraTS 2020 benchmark dataset and yields results comparable to standard methods: Dice scores of 87.09%, 80.32%, and 74.63% for the whole tumor (WT), tumor core (TC), and enhancing tumor (ET), respectively. An ablation study is also conducted to better understand the generalizability of the design.
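The reported Dice scores measure the overlap between predicted and ground-truth segmentation masks. A minimal numpy sketch of the metric itself, independent of the network:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A score of 1.0 means perfect overlap and 0.0 means none; small sub-regions such as the enhancing tumor are penalized heavily for boundary errors, which is why the attention gate targets them.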
Despite the combined effort, the COVID-19 pandemic continues to devastate healthcare systems and the well-being of the world population. With RT-PCR testing facilities in short supply, chest radiography has served as one screening approach. In this paper, we propose an automatic chest X-ray image classification model that uses pre-trained CNN architectures (DenseNet121, MobileNetV2) as feature extractors, together with wavelet transformation of images pre-processed with the CLAHE algorithm and Sobel edge detection. Our model detects COVID-19 from X-ray images with high accuracy, sensitivity, specificity, and precision. We thoroughly examine the results of different architectures and compare pre-processing techniques (histogram equalization and edge detection). In our experiments, a Support Vector Machine (SVM) classifier fitted to the wavelet and MobileNetV2 feature sets identified COVID-19 most accurately (accuracy 97.73%, sensitivity 97.84%, F1-score 97.73%, specificity 97.73%, and precision 98.79%). Memory consumption is also examined to make the model more feasible for telemedicine and mobile healthcare applications.
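To illustrate the edge-detection pre-processing step, here is a minimal numpy sketch of the Sobel gradient magnitude (a direct convolution for clarity; a production pipeline would use an optimized routine such as OpenCV's `cv2.Sobel`):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude of a 2-D grayscale image via 3x3 Sobel kernels.

    Uses 'valid' convolution, so the output is 2 pixels smaller per axis.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)  # per-pixel edge strength
```

Strong responses appear only at intensity discontinuities, which is what makes the resulting edge maps useful as inputs to the wavelet/CNN feature extractors.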
Detecting abnormal behaviour in ground and aerial systems is a challenging task, especially in an unsupervised setting. Embedded sensors such as an inertial measurement unit (IMU) and a digital camera gather information about the motion of these systems in real time. In this paper, we focus on building an intelligent, heterogeneous autonomous system that can detect abnormalities from that information. We propose two novel methods for the task, one for the sensor data and the other for the image data: an LSTM autoencoder for the sensor data and an optical-flow-based convolutional autoencoder for the image data, along with a mathematical model for the abnormality score. The LSTM model can pinpoint the reason behind an abnormality and make predictions in real time. Both the image and sensor models are robust to noise and provide a continuous anomaly score that reflects the severity of incidents.
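The abstract does not spell out the abnormality-score model; a common formulation for autoencoder-based detectors, sketched here as an assumption, scores each timestep by its reconstruction error normalized against statistics measured on normal training data:

```python
import numpy as np

def anomaly_scores(x, x_hat, mu, sigma, eps=1e-8):
    """Continuous per-timestep anomaly score from reconstruction error.

    x, x_hat : (T, d) original and autoencoder-reconstructed sensor windows.
    mu, sigma: mean/std of the per-step error on normal training data.
    Higher scores indicate more severe deviations from normal behaviour.
    """
    err = np.mean((x - x_hat) ** 2, axis=1)  # per-timestep MSE across channels
    return (err - mu) / (sigma + eps)        # z-score: a graded severity measure
```

Because the score is a continuous z-score rather than a binary flag, mild disturbances and severe incidents are distinguishable, matching the graded severity measure described above; inspecting which sensor channel dominates `err` is one way such a model can point to the reason behind an abnormality.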
Traffic surveillance and monitoring with fisheye cameras are gaining popularity because of their 360-degree wide-angle view. High-performance deep learning-based object detectors have significantly improved vehicle detection from fisheye images; however, detecting vehicles in night-time images remains difficult due to low brightness and poor contrast against the background. We find that night images can be made more suitable for the model to learn from by intentionally blurring selected portions of the images before training. In our proposed technique, termed SelectBlur, we first divide a night image into square grids and blur each grid cell that meets certain conditions. We show that blurring parts of the image known to contain no vehicles leads to significantly improved performance. SelectBlur, in conjunction with state-of-the-art object detectors such as YOLOv5x and YOLOv5s, improves mean average precision over the baseline without pre-processing by 5.3% and 3.0%, respectively. We also perform an ablation study of the proposed algorithm, considering different conditions for blurring and varying the type and size of the kernel used to perform the blurring operation.
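The grid-wise idea can be sketched in a few lines of numpy. The brightness-threshold condition and the mean-replacement "blur" below are illustrative stand-ins; the paper's actual conditions and Gaussian/box kernels may differ:

```python
import numpy as np

def select_blur(img, grid=32, thresh=40.0):
    """Blur selected grid cells of a grayscale night image.

    Illustrative rule: a cell is blurred if its mean brightness falls
    below `thresh` (a proxy for "too dark to contain a visible vehicle").
    The blur here replaces the cell with its mean, i.e. an extreme box blur.
    """
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(0, h, grid):
        for x in range(0, w, grid):
            cell = out[y:y + grid, x:x + grid]  # view into `out`
            if cell.mean() < thresh:
                cell[...] = cell.mean()  # in-place: flattens cell detail
    return out
```

Cells failing the condition pass through untouched, so detail around visible vehicles is preserved while uninformative dark regions stop contributing noisy gradients during training.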
Chest X-rays remain the most affordable radiological examination for screening thoracic diseases, and the pathological information is usually concentrated in the lung and heart regions. However, model training mainly relies on image-level class labels in a weakly supervised manner, which makes computer-aided chest X-ray screening quite challenging. Several recent methods attempt to determine the local regions containing pathological information that are vital for diagnosing thoracic diseases. We propose a novel deep framework for multi-label classification of thoracic diseases in chest X-ray images. To exploit disease-specific cues effectively, we locate the lung and heart regions containing pathological information with a well-trained pixel-wise segmentation model that generates masks. Compared to existing methods that fuse global and local features, we adopt feature weighting to avoid weakening visual cues unique to the lung and heart regions. Existing deep learning-based algorithms frequently require substantial supervision for training such systems, such as annotated bounding boxes, which are difficult to gather at scale; instead, we offer PCAM pooling, a novel global pooling operation for lesion localization that requires only image-level supervision. Our pixel-wise segmentation also helps correct deviations in locating local regions. Evaluated on the benchmark split of the publicly available ChestX-ray14 dataset, comprehensive experiments show that a DenseNet-121 model trained with PCAM pooling beats state-of-the-art baselines. Compared to localization heat-maps obtained by CAM, the probability maps generated by PCAM pooling exhibit distinct, crisp boundaries around lesions. Our proposed network thus effectively exploits the pathological regions containing the main cues for chest X-ray screening.
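The probability-weighted pooling idea behind PCAM pooling can be sketched in numpy: per-pixel logits are mapped to lesion probabilities with a sigmoid, those probabilities are normalized into attention weights, and the image-level logit is the weighted average of the pixel logits. This is a simplified single-class sketch; the full method's details may differ:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pcam_pool(logit_map):
    """Pool a per-class logit map (H, W) into an image-level probability.

    Returns (image_probability, per-pixel probability map). The probability
    map doubles as a localization map, typically with sharper boundaries
    than CAM-style heat-maps.
    """
    prob_map = sigmoid(logit_map)            # per-pixel lesion probability
    weights = prob_map / prob_map.sum()      # probabilities as attention weights
    pooled_logit = np.sum(weights * logit_map)
    return sigmoid(pooled_logit), prob_map
```

Because high-probability pixels dominate the weights, a small but confident lesion region drives the image-level prediction even under image-level-only supervision, while the same probability map is read off directly for localization.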